The Implications of Weakening or Breaking Encryption
Encryption is fundamental to our daily lives. Practically everything we do online makes use of encryption in some form. Our financial transactions, health records, government services, and private messages are all protected by strong encryption.
Encryption is the process of changing information in such a way as to make it unreadable by anyone except those possessing special knowledge (usually referred to as a "key"), which allows them to change the information back to its original, readable form.
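For readers who want to see that idea in code, here is a minimal sketch of symmetric encryption in Python. It assumes the third-party cryptography package and an invented sample message; it illustrates the principle only and does not describe any particular product mentioned in this article.

```python
# Minimal symmetric-encryption sketch (assumes: pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the "special knowledge": the key
cipher = Fernet(key)

token = cipher.encrypt(b"My private message")  # unreadable without the key
print(token)                                   # ciphertext bytes

print(Fernet(key).decrypt(token))              # key holder recovers the original message
```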
Encryption not only protects our sensitive personal data but also secures communications between computerized medical devices and, in the near future, even autonomous, self-driving vehicles. It is also an important last line of defense that protects Internet users globally and preserves core principles such as freedom of expression and personal privacy.
Law Enforcement vs. Encryption
Recently, political and law enforcement leaders in the United States, the United Kingdom, and India have called on the tech industry, including private messaging platforms like WhatsApp and social media networks, to redesign their systems to ensure that governments can access users' information and content, even when encrypted.
The debate about encryption, or security versus privacy, is not new. In the 1990s, the governments of the United States and a number of other industrialized countries advocated for weakening encryption, claiming that the widespread use of encryption would prove to be disastrous for law enforcement. And in the last few years, some governments have introduced new controversial encryption laws that compel tech companies to grant law enforcement agencies access to encrypted messages, such as the Australian Telecommunications (Assistance and Access) Act, which passed into law at the end of 2018.
James Comey, the former FBI Director, gave a speech at the Brookings Institution in October 2014 that was entirely focused on the challenges posed by encryption. "The FBI has a sworn duty to keep every American safe from crime and terrorism, and technology has become the tool of choice for some very dangerous people. Unfortunately, the law hasn't kept pace with technology, and this disconnect has created a significant public safety problem. We call it 'Going Dark,'" he said.
The US Government considers "Going Dark" to mean its inability to access content on private messaging platforms such as WhatsApp, which use encryption systems that prevent the service provider itself from decrypting the contents of messages sent and received.
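The sketch below illustrates, in simplified form, why a provider in that position cannot read the messages it relays. It uses public-key encryption with the PyNaCl library purely as an illustration; it is not WhatsApp's actual protocol (WhatsApp uses the Signal protocol), and the names and message are invented.

```python
# Illustrative end-to-end encryption sketch (assumes: pip install pynacl).
# The relaying server only ever sees ciphertext; it holds neither private key.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # stays on Alice's device
bob_key = PrivateKey.generate()     # stays on Bob's device

# Alice encrypts to Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Bob decrypts with his private key and Alice's public key.
print(Box(bob_key, alice_key.public_key).decrypt(ciphertext))  # b'meet at noon'
```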
The world did not "Go Dark." On the contrary, law enforcement agencies now have much better and more effective surveillance capabilities than they had before, as substantially more data — especially metadata — is available for collection and analysis by law enforcement.
Weakening encryption poses grave security risks, and there are four major problems we face. The first is that weakening encryption would force a U-turn from the best practices that make the Internet more secure by default; these practices include end-to-end encryption of private messages between Internet users.
Second, designing an exceptional access system to allow access to private encrypted data in global messaging platforms like WhatsApp would substantially increase the system complexity and vulnerability. Security researchers have always indicated that complexity is the enemy of security.
Such an exceptional access system, on a global platform with billions of users, would have to be deployed and tested by literally hundreds of thousands of developers all around the world, making security testing difficult and less effective.
Third, providing governments or law enforcement with exceptional access would create concentrated targets that could attract bad actors. Security credentials that unlock encrypted data would have to be retained by the platform provider, law enforcement agencies, or other trusted third parties. If law enforcement has keys and guaranteed access to everything, then a potential attacker who has gained access to these keys would enjoy the same privilege on a massive, global scale.
Fourth, the position of governments in democratic and industrialized countries on encryption does not recognize that some governments around the world pose varying degrees of threats to the freedom and privacy of individuals. Encryption helps people exercise their internationally recognized human rights, including freedom of speech and expression, in countries with authoritarian regimes.
The Way Forward: A Multi-Stakeholder Dialogue
The current polarized debate on the use of encryption to promote security regrettably assumes that solutions must have winners and losers. We should reject such an assumption.
Encryption plays a pivotal role in securing the data at the heart of our modern society and economy. This effort involves not just the tech companies that create products and services; it should also extend to the millions of Internet users around the world who rely on private messaging platforms, products, and services to empower their daily lives, to civil society and digital rights activists, to the companies that encrypt human resources, sales, or other data, and even to the law enforcement officials who investigate crimes.
With so many interests at stake, it is vital that these discussions about the future of encryption involve all perspectives and stakeholders affected at a national and global level.
Written by Mohamed EL Bashir, Public Policy & Internet Governance Strategist
This past Saturday, a most remarkable thing happened when, for the first time in 26 years, the IANA registrar opted not to renew its registrations for the single-character .COM domain names (SCDN). These names were initially registered in 1993 by the man who was IANA, Jon Postel, and ever since then have been maintained as active registrations by ICANN — until now.
Presumably, there will be a Grace Period followed by a Pending Delete period before the names expire, and, as is likely to be the case, they will cease to exist. The .COM registry agreement prohibits the registry operator from conjuring any of them back into existence — except one, O.COM, which the Second Amendment to the .com Registry Agreement specifies can be released, following an auction process that determines the highest bidder.
But the auction — like the recent .ORG mess and pretty much everything ICANN-related — is a disingenuous fraud and a mockery of the letter and spirit of multistakeholder governance.
Never mind that millions of dollars in auction proceeds from a public interest asset are to be distributed to a non-public list of organizations by a trustee whose selection — and length of tenure — is in the sole discretion of the registry operator. After all, it's much easier just to keep $7.85 and secretly funnel the rest of the money through a hand-picked trustee to a hand-picked set of organizations than to seek a waiver of consumer pricing safeguards from pesky competition regulators at the National Telecommunications and Information Administration (NTIA) and the Justice Department.
Much more sordid, however, have been the bad-faith waivers that ICANN and the registry operator granted unto themselves to ignore standard trademark rights protection mechanisms (RPMs) for new and previously reserved domain names (e.g., Sunrise Period, Trademark Clearinghouse, Priority Access) and VeriSign's commitment — made multiple times over multiple years — to enhanced protections for trademark owners.
The argument has been made that RPMs don't apply to .COM SCDNs and, therefore, that owners of registered trademarks with a clear claim can be denied their property rights. This is self-serving nonsense that, if allowed to prevail, would be a demonstrable failure by ICANN, its Intellectual Property Constituency, and the International Trademark Association (INTA) to safeguard brands and trademark registrants. It is only in the instances where rights-holders cannot be clearly ascertained — or if they are ascertained but forego their claim — that any auction can be contemplated.
Written by Greg Thomas, Founder of The Viking Group LLC
InvenTel makes security cams for cars. It is trying to crack down on Chinese counterfeiters. It brought a prior lawsuit against a wide range of defendants, including GoDaddy. InvenTel voluntarily dismissed GoDaddy from that suit. It then brought a second round of litigation involving a new counterfeit site allegedly run by the same bad guys, www.hdmirrorcambuy.com, a domain name registered via GoDaddy. Initially, InvenTel claimed GoDaddy hosted the site as well, but it dropped that claim. So the suit against GoDaddy devolves into a simple question: can GoDaddy be liable for counterfeiting activity for registering the domain name?
The answer is no. This is wholly unsurprising because most of these issues were litigated and resolved in the 1990s, making this an old school case. On the plus side, it's a nice reminder that the law hasn't changed in the past two decades.
Federal Trademark Infringement. In the ACPA, Congress provided a safe harbor for domain name registrars (15 U.S.C. § 1114(2)(D)(iii)). This safe harbor hasn't been litigated very often, so this is a rare but otherwise unremarkable opinion applying the safe harbor. The court says:
"The only pleaded basis for GoDaddy's knowledge that the Website would be used to infringe is the Li Defendants' conduct using other websites and the Prior Action. But GoDaddy's domain name registration system is automatic. Therefore, without a warning that the specific URL being registered would be used for an illicit purpose, GoDaddy did not have a "bad faith intent to profit" from the automatic registration of 'www.hdmirrorcambuy.com.' In other words, failing to prevent its computer system from registering the Website does not constitute 'bad faith.' Plaintiff provides no basis for the proposition that GoDaddy must predict which URLs will be used for infringement purposes and proactively stop them from being registered."
To be clear, I don't think this passage supports the inverse proposition, i.e., that GoDaddy would be automatically liable if it had gotten a warning that a domain name was being used for illicit purposes.
State Direct Trademark Infringement. GoDaddy didn't "use" the allegedly counterfeited goods.
State Indirect Trademark Infringement. The Ninth Circuit shut down registrar liability in the 1999 Lockheed v. NSI ruling. "GoDaddy does not control or monitor the instrument of infringement (i.e., the Website)."
Direct Copyright Infringement. As a registrar, GoDaddy doesn't "copy" anything.
Indirect Copyright Infringement. There was no direct copyright infringement taking place when GoDaddy registered the domain name.
Direct Patent Infringement. GoDaddy didn't make, use, or sell the counterfeit goods.
Indirect Patent Infringement. "GoDaddy permitting its computer system to automatically register the Website, even with knowledge of the Prior Action, is not an activity GoDaddy knew would 'cause infringement.' As previously stated, GoDaddy is not obligated to proactively guess which proposed domain names will likely be used for nefarious purposes."
State Consumer Fraud Act. InvenTel wasn't GoDaddy's "consumer."
The court summarizes:
"As to the automatic registration of the Website...that conduct cannot produce direct or contributory intellectual property liability on the facts of this case. GoDaddy did not have the requisite knowledge that the Li Defendants would use the Website to infringe on InvenTel's intellectual property rights when it engaged in the only conduct at issue — providing domain name registration services. InvenTel cannot plausibly allege GoDaddy acted with the requisite knowledge, as InvenTel filed its Complaint without even notifying GoDaddy of the new Website. Even considering facts outside the Complaint set forth by InvenTel, GoDaddy could not be liable. InvenTel has not presented any theory under which GoDaddy is obligated to monitor and predict which websites might be used for infringing purposes. Even when the same individual registers multiple websites, it is the intellectual property holders' responsibility to protect their property, not third parties'. Had InvenTel taken advantage of GoDaddy' s takedown request procedures, and GoDaddy refused to deregister the Website (despite evidence of infringement), InvenTel may have a claim. But here, InvenTel ran to federal court without informing GoDaddy of the infringement. Having no notice of the infringement, liability will not attach because GoDaddy did not take any action with the requisite knowledge."
A periodic reminder that even if the law doesn't require notice-and-takedown, courts are unimpressed when plaintiffs could have solved their problems by sending takedown notices.
As far as I can tell, the court doesn't distinguish between domain name registration and domain name hosting (as opposed to website hosting, which the court does distinguish). I wonder if the court would be more amenable to liability for domain name hosting. The above passage suggests it might be.
Trademark, copyright, and patent law all have discretionary fee-shifting provisions. Given the complete lack of merit in this case and the venerability of the legal principles it raised, I wonder if the court will be amenable to a fee-shift request from GoDaddy.
It has officially been over a year since the European Union's General Data Protection Regulation (GDPR) took effect, and in that time, we've come to see both the benefits and the very serious drawbacks of this regulation. While the implementation of GDPR has been heralded by privacy advocates as a major step forward in keeping the personal information of individuals safe, it has also had the unintended consequence of making it easier for individuals and/or entities with less than honorable intentions to effectively disappear online.
The WHOIS database, the formerly public registry of web domains that contained information about every registered domain, including the registrant's name and basic contact information, has been effectively rendered useless, with the information now largely redacted by domain name registrars. The process for requesting that information is now unique to each registrar, and even then, registrars are free to decide whether or not they want to comply with the request.
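For context on what such a lookup involves, the sketch below issues a raw WHOIS query over TCP port 43, the protocol defined in RFC 3912. The server and domain are illustrative assumptions; for .com names, full registrant details (where they exist at all) sit with the sponsoring registrar's WHOIS service, where contact fields now typically read "REDACTED FOR PRIVACY".

```python
# Minimal raw WHOIS lookup over TCP port 43 (RFC 3912).
# Server and domain are illustrative; registrant contact fields in the
# response are now usually redacted by the registrar.
import socket

def whois(domain: str, server: str = "whois.verisign-grs.com") -> str:
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois("example.com"))
```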
Within this void of accountability, cybercriminals are exploiting the privacy loophole, as they may now launch malicious domains with anonymity. As a result, security researchers and law enforcement are finding it exponentially harder to do their jobs keeping the web safe. See EU Laws May Be Hampering Pursuit of Terrorists.
Other groups facing similar problems with criminals utilizing this privacy loophole include consumer protection agencies, child advocacy groups, anti-human trafficking organizations, intellectual property rights holders and brand protection agencies, to name just a few.
IBM X-Force Threat Intelligence Research recently released a report which shows just how much of an impact GDPR is having when it comes to utilizing traditional WHOIS information for the tracking and blocking of malicious domains and other nefarious web-based activities.
Comparing recent enforcement data to enforcement data from before the ICANN policy changes reveals stark statistics about the risks faced by consumers in a post-GDPR world. Prior to the implementation of GDPR, security researchers were able to identify and block 1.8 million newly registered malicious domains in October of 2017 alone. Fast forward to February of 2019 and that number drops to less than 160,000. That means effective enforcement efforts utilizing WHOIS information are now at just 9% of their former level, leaving a staggering 91% of potentially malicious domains and the sites to which they link up and running, and the criminals behind them unhindered.
While there is no question there is a need to keep the private information of the public safe, it is clear that the GDPR as it applies to public safety should be re-evaluated. Cybercrime is a $600 billion a year business, and with GDPR making it easier for criminals to thrive with relative impunity, that number is only going to increase.
Completely eliminating GDPR and returning the WHOIS database to its originally public status is not the answer, which is why the need to establish regulated access for individuals involved in cybersecurity, law enforcement, and web safety and security should be thoroughly explored.
For those who need to access the WHOIS database, AppDetex offers its WHOIS Requestor System (AWRS). Developed for and used by customers to obtain non-public WHOIS data, the system sends customer-verified requests for non-public WHOIS data for cybersecurity, consumer protection, and IP enforcement activities to ICANN-accredited registrars.
Until a satisfactory compromise can be achieved, online crime will continue to thrive, making it clear that the price of online privacy is, apparently, safety and security.
With 2019 coming to a close, we're not just saying goodbye to the past 365 days, we're also saying goodbye to an entire decade. As we bid farewell to the 2010s, we're taking this opportunity to look back and reflect on the digital decade as well as consider what the future might have in store for us all.
The past ten years were a whirlwind of change, with new advances in technology exploding onto the market at a faster pace than ever before. We witnessed an incredible amount of change not only in how we use the internet but also in how we interact with the digital world. The internet, once accessible only via a browser and computer, has become more portable than ever. With mobile phones, tablets, and even our watches, we're able to access the internet at any time. Not to mention the wealth of connections via the cloud — voice, home and factory floor automation, transportation — the opportunities are endless.
How we access the internet isn't the only big change. From personal email to social, chat, and messaging apps, and from hardware and traditional hosting to digital storage in the cloud, we also witnessed a massive expansion of the domain name space. And it doesn't stop there. We now rely on apps to help us complete everyday tasks, from connecting with our friends to shopping to conducting financial transactions to staying abreast of world events with news channels that are updated every second of every day. In addition, we're changing how we access the internet, with Artificial Intelligence (AI) built to recognize our voices and Augmented Reality (AR) helping to blur the lines between what's real and what's digital.
In this post, we'll take a look back at some of the major changes to arrive on the scene in the past ten years and follow with a second blog in which we look ahead to the next ten years.
So grab a glass of bubbly, put on your most festive NYE hat and let's toast five major milestones we witnessed this past decade.
1. The Rise of Mobile
In the early 2010s, mobile phones and tablets were just starting to take off. Consumers still largely relied on computers for internet usage. Mobile and tablets accounted for just a scant single-digit share of overall internet usage, and when you think back to where the technology for these devices was at that time, it's no surprise. The iPhone 3G only had a single rear-facing camera, making it exponentially more difficult to take a good #selfie. And forget about posting it to Instagram; that app didn't officially launch until October of 2010. Speaking of launches, the iPad was also still a relatively new player on the field, with its release in April of that year as well.
Mobile devices and apps, although still in their infancy at this point in the decade, resonated with consumers, who quickly embraced the technology. It was this wide acceptance that allowed mobile devices to surge in popularity, and in a mere six years, they were being used more often than computers to access our favorite internet sites and apps. With all those eyeballs locked onto mobile screens, and with over 5 billion global users of mobile devices, it's no wonder mobile ad spend exceeded television ad spend in 2018.
Speaking of eyeballs locked onto screens, at the same time mobile usage was rising, unfortunately so was the number of accidents, ranging from people distractedly bumping into one another on sidewalks to highway accidents attributed to mobile use, which prompted legislation aimed at curbing distracted driving.
2. Apps Lead the Way
What would a mobile phone be without apps? When it comes to connecting consumers with brands, few options are as popular right now as the mobile app, and few mobile app platforms are as popular as the Apple App Store and the Google Play marketplace. In the short ten years since launch, these two platforms managed to drive nearly three billion app downloads and hundreds of millions of dollars of revenue. By the end of 2018, there were nearly 6 million apps available in the two stores, with over 194 billion apps downloaded. Even with conservative linear growth, it's estimated that the revenue generated by mobile apps will hit $210 billion at the end of this year.
Of course, it wasn't always this way. Originally launched in October of 2008 and called simply the "Android Market," Google Play had humble beginnings. Early apps were capped at a maximum size of 25 megabytes, and by March of 2009, there were 2,300 apps available for download. Rebranded as "Google Play" in 2012, the platform has grown to encompass games, music, books, movies and television, news publications and magazines, devices, and Play Pass, a subscription service that allows users to access games and apps without ads or in-app purchases. As of September of 2019, the Google Play store hosts over 2,800,000 apps ranging in size from 10MB all the way up to 4GB, with thousands more added every day.
Equally impressive is the growth experienced by the Apple App Store. As the second-largest mobile app distribution platform in the world, the Apple App Store currently houses just under 1.8 million apps, a huge leap from the original 500 the platform hosted at its launch in 2008. The platform generated over $22.6 billion in 2018. And with over 20 million currently registered iOS developers constantly working on new and exciting apps for the over 500 million weekly visitors, those numbers are only expected to grow.
3. Fun and Games – Literally
Speaking of apps, there are few that are as insanely popular as those used for entertainment. Social, messaging, media and games drive the majority of all app downloads and revenue. In fact, the top four most downloaded apps of the last decade are Facebook, Messenger, WhatsApp and Instagram.
When it comes to generating revenue, it's the entertainment apps pulling in the biggest numbers with games, video streaming and music apps consistently ruling the top three slots.
Mobile devices and apps allowed people to connect with others in ways we never imagined before. Social media usage was tracked at just 5% back in 2005, meaning the early chat rooms were pretty quiet. But now, with usage at over 75% and growing every day, social media sites have made reaching out to friends and family across the globe as easy as pushing an upvote button.
While there is no question Facebook is king when it comes to connecting people, it's Twitter that has actually managed to change the course of history, helping to rally support for causes from the Arab Spring to the current protests in Hong Kong. Hashtags like #metoo and #blacklivesmatter helped shed light on important issues in the United States, and there's no question that social media influence, both positive and negative, was felt in multiple elections across the globe, including the 2016 US presidential race.
Speaking of staying connected, we can't talk about social media or apps without touching on the emergence of wearable technology. From watches that monitor health to rings that vibrate for incoming phone calls to glasses that project heads-up displays, we're just an arc reactor away from real-life Iron Man technology. Already robust, with over 526 million wearable devices currently in use, the market is estimated to grow to over 1.1 billion devices by 2022 as telecom technology switches from 4G to 5G.
4. Making it Rain With Cloud Computing
With all these innovations, one would expect the future of the internet to be as bright as a sunny day, but believe it or not, the internet would not be flourishing if it weren't for the cloud. Cloud computing services include infrastructure, computing power, servers, storage, databases, networking, software, analytics, and even entire business applications. Individuals and organizations lease cloud services from hosts. This allows them to run their web and mobile apps and to store user, organization, or application data in a secure location accessible via colocated cloud computing services, including cloud desktop storage, a cloud storage gateway, or web-based content management systems.
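As a small, hedged example of what "leasing" cloud storage looks like in practice, the sketch below writes and reads an object in Amazon S3 using the boto3 SDK. The bucket and object names are hypothetical, and locally configured AWS credentials are assumed.

```python
# Storing and retrieving a small piece of data in leased cloud storage
# (assumes: pip install boto3, AWS credentials configured locally;
# bucket and key names are hypothetical).
import boto3

s3 = boto3.client("s3")

s3.put_object(Bucket="example-company-data",
              Key="reports/q4-summary.txt",
              Body=b"Quarterly numbers go here")

obj = s3.get_object(Bucket="example-company-data", Key="reports/q4-summary.txt")
print(obj["Body"].read())   # the stored data, retrievable from anywhere
```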
Conceptually anticipated in the 1960s by Joseph Carl Robnett Licklider, cloud storage has truly taken off in the past decade with services like Amazon Web Services, Microsoft (including Azure, Office 365, Dynamics 365, and several other segments of the Productivity and Business Process Division), SmugMug, Dropbox, Google Drive, and even Pinterest, to name a few.
In addition to freeing companies from buying and managing software, physical computers and infrastructure, cloud storage is economical, with users only paying for the storage they use. Businesses that utilize cloud-based services typically cut their energy consumption by up to 70%, a valuable statistic with today's increasingly green initiatives.
Of course, while the cloud might seem ethereal, there's no question the profits made by the companies that provide these services are as solid as a rock, considering the three primary players in the space: Amazon's AWS, Microsoft's Azure, and Google Cloud. Combined, these companies helped drive 2019's overall cloud infrastructure market growth of 42%; reported sales this year include $7.7 billion for Amazon AWS, $9.6 billion for Azure, and $5.45 billion for Google Cloud.
5. What's in a name?
Essentially nothing more than identification strings that serve as addresses for websites, domain names are a critical part of the digital experience, and as such, have found themselves at the heart of numerous discussions and debates this past decade. In 2010, there were almost 200 million domain names registered, a number that had grown to over 354.7 million by the second quarter of 2019.
Originally, TLD extensions were limited to seven options: .com, .org, .net, .int, .edu, .gov, and .mil. All that changed in 2011 when ICANN's board voted to end most restrictions on the creation of generic top-level domain names (gTLDs). This decision led to an explosion of TLDs, covering everything from specific branded company names (.ford, .apple, etc.) to generic descriptions of services (.banking, .karate, etc.). There are currently over 1,000 ICANN-approved generic TLDs, with more being added.
While the release of these gTLDs may free up valuable space online for new domain registration, it's also enabled bad actors to bulk register hundreds and sometimes thousands of domains with the intent to use them for phishing, fraud, and other nefarious reasons. In addition, with the implementation of the EU's GDPR in May of 2018 and the ensuing redaction of all formerly public WHOIS information for registrants, it's now much easier for these bad actors to hide.
Prior to GDPR, gaining access to registrant identifying information through WHOIS databases enabled rapid responses to these sorts of attacks. Now, in a post-GDPR world, access to domain name registrant contact information is severely limited and hidden behind registries and registrars who are reluctant to respond to WHOIS requests for fear of incurring massive fines.
While these changes to how private information is stored and shared is good news for any individual who wants to remain anonymous online, it's bad news for a whole host of individuals, including security researchers, journalists, brand protection specialists and law enforcement agencies who use the WHOIS information as a way of identifying and tracking bad actors across the internet.
While there is no question there is a need to keep the private information of the public safe, it is clear that the GDPR, as it applies to public safety, needs to be re-evaluated. Cybercrime is a $600 billion a year business, and with ICANN's policies and the overly conservative response from registries making it easier for criminals to thrive with relative impunity, that number is only going to increase.
Completely eliminating GDPR and returning the WHOIS database to its originally public status is not the answer, which is why the need to establish regulated access for individuals involved in cybersecurity, law enforcement, and web safety and security should be thoroughly explored. Although we are closing out this decade without a truly satisfactory resolution to this situation, we predict the issue will continue to be a topic of hot debate as both sides work towards a solution.
What's next in the 2020s?
With these advances behind us, it is daunting to try to predict what's coming in the decade ahead. What should we expect? What should companies and businesses anticipate when it comes to managing their brands online and how they approach their brand security strategies?
It's safe to say that innovation is far from over and that we'll continue to see new and exciting technologies take hold. We predict AR will continue to advance and for AI to become even more refined. We expect interfaces like Alexa, Siri and Cortana to become even more life-like with AI, Machine Learning and Natural Language Processing continuing to evolve. We're also expecting wearable technology innovation to continue, and with technology, manufacturing and medicine working so closely together, we'll see more integration between humans and machines, which will likely achieve both greater worker productivity and better healthcare.
While some of these technologies may be in the development phase now, we expect refinement and advancements to continue, making them ubiquitous. The question then becomes, how will these advances impact security and intellectual property protection teams? While we can't say exactly how, we can safely assume the changes will be profound.
What is evident about the past decade is that these trends, while making commerce, entertainment, and communication more convenient, have dramatically increased the amount of digital risk faced by consumers and the brands they trust. These risks, and the threats they pose to brands' customer relationships, revenue, and reputation, have dictated a new and more aggressive defensive posture in digital channels.
Stick around for part two of our blog series "Digital Decade in Review — A Look Forward" where we'll be breaking out our Magic 8-Ball (the digital version, of course) and pulling together what we think the next decade will bring both in technological developments and their impact on brand protection and security.
Written by Frederick Felman, Chief Marketing Officer at AppDetex
There is a difference, of course, between asserting a claim that cannot possibly succeed in an administrative proceeding under the Uniform Domain Name Dispute Resolution Policy (UDRP) and being unprepared to prove a claim that may have merit with the right evidence. Still, there is also an overlapping similarity in that complainants are either shockingly unfamiliar with UDRP procedures and jurisprudence (and should have retained counsel who are) or know they have no actionable claim but plow ahead anyway perhaps in the hope their abuse of the proceedings will not be noticed.
The Panels' exasperation is evident, to take a couple of examples, in Adventure SAS v. Mike Robinson, BlackHawk Paramotors USA Inc., D2019-2489 (WIPO December 12, 2019), involving a dispute over a soured distributorship, in which the Panel noted that "those responsible for the drafting of the Complaint [Complainant was represented by counsel] and/or the person who authorized the filing of the Complaint knew that the Complaint should not succeed because on any fair reading of the available facts the Respondent registered the Domain Name in good faith." In Nalli Chinnasami Chetty v. Anthony Nalli, FourPoints Multimedia Corp., D2019-2642 (WIPO December 18, 2019) (<nelli.com>) the "Complainant and its counsel also provided no evidentiary support whatsoever to support their argument that the Respondent must have registered and used the disputed domain name in bad faith… [T]hey [also] completely ignored the requirements set out in the Policy for establishing bad faith registration and use of a domain name [and] disregarded precedent and unfavorable facts in concluding that the webpage (which makes no connection whatsoever to the Complainant) was an attempt to misappropriate its trademark reputation, and offered no more than unsupported allegations."
The consensus as to what is expected of complainants (implicit in Adventure SAS and Nelli) has been expressed succinctly in a number of memorable decisions: "[Complainant] should at least be minimally versed" and "[i]t is no excuse that a party or its representative is unfamiliar with clear Policy precedent, much less the clear language of the Policy and the Rules themselves" [D2012-1555], or "the deficiencies [of proof] must have been obvious to anyone remotely familiar with the Policy" [D2016-0126], or "[i]t is no excuse that Complainant may not be familiar with clear Policy precedent, the Policy, or the Rules" [D2012-2455].
The gamut of mark owners included in these rebukes ranges from the confused (mistaking the UDRP for a trademark court), through the clueless (who ought to have known otherwise), through the hapless (insofar as marshaling proof), to the manipulative (alleging bad faith when the facts contradict the allegations). The "confused" can be forgiven since, while domain names are central to their complaint, the claim actually sounds in trademark infringement. The "clueless" are generally not forgiven; they should have known better.
In Altiplano Voyage, the Panel notes that "this case presents a more nuanced trademark dispute (at least on the papers presented) than that for which the Policy is equipped." The Panel also comments on Complainant's evidentiary deficiencies. In Taffo, the Panel "incidentally notes that the Policy is designed to resolve standard cases of abusive domain name registrations, while the present one is a complex trademark matter that will be more appropriately handled by the Court of Rome before which an Ordinary Proceeding is already pending." If there is a remedy at all it must be for trademark infringement.
The "clueless" are mark owners whose rights postdate the registration of challenged domain names. They have no claim for cybersquatting under the UDRP (even though they have standing to maintain the proceeding), and no statutory claim or even standing under the Anticybersquatting Consumer Protection Act (ACPA). The possibility that this kind of "clueless" is not clueless at all but deliberately abusive of the administrative proceeding cannot be discounted and is commonly dealt with as such.
A number of recent UDRP decisions illuminate how mark owners attempt to justify their complaints. There is no remedy because, owing to the timing of the mark owner's rights, there can be no actionable violation. In Advice Group S.P.A. v. Privacy Administrator, Anonymize, Inc. / Michele Dinoia, Macrosten LTD, D2019-2441 (WIPO December 2, 2019) (<advicegroup.com>) the Panel explains that the "Respondent acquired the Domain Name some two years before the Complainant obtained a trademark, and nine months before the Complainant even filed its trademark application." In another case, Gary Chupik v. Shant Sarkuni, FA1910001868583 (Forum November 18, 2019) (<elitemindset.com>), Complainant tried another, equally unpersuasive maneuver by applying for a trademark, strangely believing (one supposes) that having a trademark is conclusive of a right to the corresponding domain name:
a. the disputed domain name was registered on April 18, 2017;
b. Complainant made unsolicited offers to purchase the disputed domain name which were rejected by Respondent;
c. Complainant filed his trademark applications with the USPTO on January 30, 2019, after his offers to purchase the disputed domain name were rejected;
d. the filing of the Complaint on or about October 28, 2019.
Complainant appears to have believed that because "he made an offer to purchase the disputed domain name from Respondent" and because the "Respondent refused to sell the disputed domain name to him for the highest price that he was willing to offer" that he had asserted a claim for cybersquatting. The Panel found the conduct sanctionable: "In the circumstances, this Panel finds that Complainant, being aware that he was not entitled to succeed, nonetheless brought this Complaint with the hope that he may secure the transfer of the disputed domain name, after he had failed to purchase same in open commercial arms-length negotiations."
Other recent Complainants represented by counsel (who ought to have known better!): Pet Life LLC v. ROBERT RIESS / blue streak marketing llc, FA181000181087 (Forum November 11, 2019) (<petlife.com>, where the mark was registered more than 5 years after registration of the domain name) and Glovoapp23, S.L. v. Wang Shun, D2019-1986 (WIPO September 30, 2019) (<glovo.com>, registered 17 years before the trademark right). Complainants in both these cases were sanctioned for asserting claims they could not possibly prove.
The second group, those I call hapless, may have meritorious claims, but either lack proof or have not marshaled what they need to establish bad faith. The inference to be drawn from a deficiency of evidence is that if no proof is offered, no proof exists. The point is illustrated in Assurity Life Insurance Company v. DOMAIN MAY BE FOR SALE, CHECK AFTERNIC.COM Domain Admin / Whois Foundation, FA1911001872882 (Forum December 21, 2019) (<assuritylife.com>): "Complainant asserts both registered and common law trademark rights. Complainant owns several USPTO registrations but none earlier in time than ... March 21, 2006 and so even its filing date postdates the registration of the domain name [May 2004]." However, in this case:
Complainant's assertion of common law rights is premised, not on proof of public awareness and reputation, but on the above statement, which in turn rests on the claim of first use in commerce date of June 12, 1996, shown in Reg. No. 3,070,343. That date is provided to the USPTO by a trademark applicant. There is no proof of a common law trademark by May 2004 when the domain name was registered.
One of the questions here, which the Panel frames as a preclusion issue, can also be thought of as a credibility issue: if Complainant really believed it had a claim, why did it wait so long to assert it? The Panel noted that "although opinions have differed as to whether the equitable doctrine of laches applies to UDRP proceedings, it has been recognized [that is, the consensus among panelists is] that delay in bringing proceedings is likely to place a higher burden on a complainant attempting to prove a state of affairs long ago." The same point is also made in NovAtel Inc. v. Registration Private, Domains By Proxy, LLC / Domain Admin, FindYourDomain.com, D2019-1939 (WIPO October 4, 2019) (<novatela.com>) and DK Company Vejle A/S v. Cody Favre, C4 Squared, D2019-2676 (WIPO December 17, 2019) (<shopcasualfriday.com>). In NovAtel, "[w]hile the Complainant asserts that its NOVATEL trademark has been in use since 1992, it provides no evidence as to how widely the mark was known at that time or, more importantly, in 2007." In DK Company Vejle, the Panel explains that
[t]he difficulty with that case is that the Complainant has provided no information as to the size or reputation of its business, and such limited evidence as it does provide indicates its business is entirely European. The Complainant says that its CASUAL FRIDAY trademark is "widely known" in the European Union but provides no evidence to substantiate that assertion. There is nothing before the Panel to suggest that a United States retailer would have had any knowledge of the Complainant or, had it carried out searches, would have found any reason to conclude it could not adopt the words as part of a name for use in the United States.
When a complainant fails to produce the supporting evidence necessary to establish common law rights, the repute of its mark at the time the domain name was registered, or any other indicia of consumer recognition, the inference must be that it has none. The consensus is as the Panel states in Adventure SAS, namely that "the natural inference in the absence of any evidence to the contrary would be that the Respondent registered the Domain Name in good faith."
The third group, less hapless I think because they simply do not have enough information until the response is filed, are mark owners complaining of domain names held or used by commercial businesses offering bona fide goods or services (distinguishing these respondents from investors reselling domain names). Two recent examples are Lexon v. Registration Private, Domains By Proxy, LLC / Surety Management, D2019-2365 (WIPO December 12, 2019) (<lexon.com>) in which Respondent offered proof that "the Disputed Domain Name was acquired as part of a legitimate business transaction when the Respondent purchased the Lexon Surety company and its trademarks"; and DSN Software, Inc. v. Rob Bay, FA1910001865961 (Forum December 10, 2019) (<practicesnapshot.com>) in which the descriptive phrase identified the services Respondent offered.
Also included in the hapless group are complainants whose marks are composed of common words, descriptive or common phrases, or arbitrary letters (as distinguished from marks composed of coined words or marks that are nationally or internationally famous). For such strings, it is notoriously hard to prove that domain names were registered for their trademark (rather than semantic) value. In Service Spring Corp. v. hao wang, D2018-2422 (WIPO December 17, 2019) (<ssc.com>) the "Complainant submits that bad faith should be inferred from (i) the Respondent's use of a privacy service, (ii) the Respondent's provision of incomplete address details in the WhoIs record and (iii) the Respondent's failure to respond to the Complainant's cease and desist letters," but these factors, even if considered, are not conclusive of liability; they do not add up to bad faith.
One final note, and a reminder: these cases rejecting claims of cybersquatting represent less than 10% of all disputes that go to award. To the extent that strings of words are common in the language community (unsurprising and common expressions, or descriptive phrases not clearly associated with any one commercial user), there is a corresponding increase in the evidentiary demand for proof of bad faith registration; bad faith use alone does not prove cybersquatting under the UDRP if there is no proof of bad faith registration.
Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP
Articles 6-12: Personal Rights. Co-authored by Klaus Stoll and Prof Sam Lanfranco. [1]
Internet governance, like all governance, needs to be founded on guiding principles from which all policy making is derived. There are no better fundamental principles to guide our policy making than the Universal Declaration of Human Rights (UDHR). This article is Part 3 of a series exploring the UDHR as a guide and template for digital governance and digital citizenship. [2] We discuss UDHR Articles 6 through 12 and address topics such as fundamental digital values, cyberlaw, policymaking, and the role of tribunals in digital governance. [3]
Articles 6 and 7 are closely connected, and we discuss them together.
Article 6: Everyone has the right to recognition everywhere as a person before the law.
Article 7: All are equal before the law and are entitled without any discrimination to equal protection of the law. All are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination.
Article 6 lays the foundation for personal rights as "recognition as a person before the law." It "recognizes the existence of the individual as a human being with distinct needs, interests, and opinions." [4] This is "a prerequisite to all other rights of the individual." [5]
Everyone, Everywhere, Equally
"Everyone, everywhere, equally" is referred to in the following as the 3e's. Articles 5 and 6stress the universality of personal rights andmake no distinctions based on race, religion, culture, or gender orientation. Personhood is a broad concept under the UDHR, and the pressing task now is to formally recognize digital personhood [6], personal data and "constructed personas" formally under its protection.
Universality and inclusivity without geographical limitations are fundamental characteristics of cyberspace and the loadbearing pillars of digital citizenship. [7] The 3e's are "separate but inseparable" as foundational principles of the UDHR and should formally underpin digital citizenship. No citizen should be denied access and protection of the law or be forced to give up these fundamental rights regarding their digital data and digital personas in the Internet ecosystem. [8]
"The concept of "sharing" is the DNA of the whole Internet.
Cyberspace has over time developed from being the way to access computers, to share files and become the space to assemble, to express, share and promote ideas and to defend and pursue them as an individual or group without the limitations of space and to a certain extent the limitations of language and culture.
It leads to a "win-win situation" and does not know losers. If the concept of sharing is ignored or substituted by a 20th century "zero-sum game" with winners and losers, the risk is high that in an interconnected world, at the end of the day, everybody is a loser. This is a fundamental lesson from the 50 years of Internet history, which should not be forgotten." [9]
DNS Values
The 3e's also apply, in a technical sense, to the operation of the Internet's Domain Name System (DNS). [10] When the DNS resolves digital address queries, it makes no distinctions and serves everyone, everywhere, equally. The DNS reflects and upholds the most important principle when it comes to the application of law: do not discriminate. [11] The DNS is more than a technical innovation; its operation inherently embodies respect for rights in cyberspace and exhibits integrity in human communication within a trusted system. Attempts to weaken the universality of the DNS through alternative roots, national segments, and closed spaces diminish our rights as persons and reduce the role of the Internet ecosystem as a venue for building our joint digital and human humanity.
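To make that point concrete, the short sketch below asks the system's resolver for a name's addresses. The domain is chosen arbitrarily for illustration, and the call is the same for every client, with no notion of who is asking.

```python
# Resolving a name through the DNS: the same query yields the same records
# for any client, anywhere (domain chosen arbitrarily for illustration).
import socket

for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])   # one resolved address per line
```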
Recognition
Recognition before the law means not only recognition of a human being as a person and a citizen, but also the recognition of the specific circumstances in which one resides, here as digital personas and digital citizens. Laws, regulations and behavior concerning digital existence must respect the (global) borderless residence of our digital beings and treat our digital data and personas as part of our being.
Cyber law is any law that is applicable in cyberspace. Treatment under the law draws on both codified law and case law. Jurisprudence under digital governance is in its infancy and will develop further over time. So will case law and legal processes regarding issues such as intellectual property, trademark infringement, domain name disputes, cybersquatting, and e-commerce practices. The development of laws on cybercrimes such as hacking, identity theft, cyberbullying, malware, spyware, phishing, and pharming, and the development of policies for acceptable practices and behavior in the digital political, economic, and cultural spheres, are also in their infancy. [13] It is essential that work in such areas have a firm understanding of, and grounding in, the principles embedded in the UDHR as they apply to persons, personal data, and personas in the cyberspaces of the Internet ecosystem. Laws and regulations must be careful to address problems as narrowly as possible and avoid unintended consequences for the free flow of information on the Internet.
At present, a general consensus exists that present international laws and treaties do apply in cyberspace. The discussion focuses more on "how they apply" rather than "if they apply." [14] Some governments consider existing treaties and national laws adequate. Others see the need to create new laws specific to cyberspace. [15] Given the global nature of the Internet ecosystem, national and regional cyberspace law will often have a relevance that goes beyond the limits of territory-based sovereignty. [16] While states may agree that cyber law is always subject to and guided by national laws and international agreements, we are at the early stages in the design of national cyber laws and of some degree of global harmonization across digital policies. The global nature of the Internet ecosystem will likely involve intensive discussion around international agreements regarding digital rights and the meaning of global digital citizenship. There is also a risk in the global online environment that the first country to regulate can, by virtue of being the "first comer," essentially impose legal rules and potential liabilities on the rest of the world. [17]
Digital Governance and Policy Making
With the explosive growth of the Internet, we observe growing efforts by some nation-states to question the applicability of existing international law in cyberspace. In 2004, the United Nations mandated the "UN Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security" (GGE). [18] The mandate of the GGE was "to consider existing and potential threats in the sphere of information security and possible cooperative measures to address them." Successive GGEs managed only to outline, but not establish, a global digital agenda and the general principle that international law applies in cyberspace, and the process failed to produce a consensus report in 2017.
Fundamental differences in how states see the role of law in cyberspace became obvious in 2018, when the UN adopted two new resolutions: one mandated another GGE as a continuation of the previous GGE, with a mandate that included taking into account the assessments and recommendations of the previous group, [19] and the other established, in parallel, an "Open-Ended Working Group on Developments in the Field of ICTs in the Context of International Security" (OEWG). The two bodies have overlapping and sometimes contradictory agendas, working methods, and remits. [20]
The two UN groups are a manifestation of the big conceptual difference between member states when it comes to cyber law and digital governance. One group of states prioritizes its own sovereignty and protection against perceived cyberthreats and undesired outside influences in its internal affairs. The other group of states prioritizes personal information security over cybersecurity and puts the integrity of the Internet ecosystem, and how digital information is processed, at the top of its agenda. These conflicting tendencies make it very difficult, and unlikely, that a consensus on international or global policies and practices will form.
Another process was recently created within the UN framework during the 74th UN General Assembly. A resolution initiated by Russia was adopted to establish an open-ended ad hoc intergovernmental committee of experts "to elaborate a comprehensive international convention on countering the use of information and communications technologies for criminal purposes." [21] Although the purported intent is to address cybersecurity, this has to be seen as another attempt to hijack larger Internet governance processes under the guise of security.
Wolfgang Kleinwaechter summarized the efforts around digital policy making so far: "In the 2000s, there was a more or less ideological battle between "isms" — multistakeholderism vs. multilateralism — which produced more controversy than progress. In the 2010s, it was widely recognized that both concepts could co-exist… But as the UN Panel has outlined, for the 2020s, this will not be enough. The next generation of Internet Governance will need much more inclusive processes where multilateralism and multistakeholderism have to be treated as two sides of one coin." [22]
Governing the Killer Robots
Cyberspace is viewed as another theatre of war. Governments have quickly identified the potential of digital technologies for military use in defense of their countries. A group of governmental experts has been negotiating since 2014 on "Lethal Autonomous Weapon Systems" (LAWS) under the Convention on Certain Conventional Weapons (CCW). [23] Their recommendations could have great influence on policymaking for cyberspace. The assessment of the legality of AI-based killer robots differs widely between states: some want to ban them like chemical weapons, others support the use of these murderous digital innovations, but none underestimate the impact of digital technologies on warfare. Given its significance and the explosive expansion of military drone use, it is surprising how little discussion about killer robots is taking place in the general global debate.
Digital Policy Making Mechanisms and Tribunals
There exists no specific judiciary body for cyberspace. States are trying to fill the void and extend their sovereignty into cyberspace by making the activities of their citizens in cyberspace subject to nation-based laws, some with elements of extraterritoriality. These efforts can only result in inadequate applications of law and expressions of justice. They do not consider the special characteristics of cyberspace and transpose territorially based concepts of law and justice into the digital realm of borderless, universal and inclusive activities and behavior. There is an urgent need to define cyber law, establish mechanisms of enforcement, and create dispute resolution tribunals, all developed through legitimate policy-making processes. To establish legitimate cyberlaws and create competent tribunals, digital citizens must be empowered and engaged in policy-making processes.
Digital Policy Making Mechanisms
Over the years there has been a growing, and increasingly confusing, number of initiatives to establish policy-making mechanisms for cyberspace. [24] Some are UN-based. [25] Some are based on national efforts. [26] Some have been initiated by other stakeholders. [27] The number of initiatives is confusing and seems to be limited only by the number of special interests represented in digital governance. Their common characteristic is that they are created as instruments to ensure that one group's specific interests prevail over those of another: profit over privacy; national interests over global brotherhood; short-term political gains over long-term common good. The list is as long as there are special interests seeking protection.
Another characteristic these initiatives share is their claim to be inclusive and open to all stakeholders while asserting that the common good is uppermost in their minds. This is strongly reminiscent of the proclamation of the pigs who control the government in George Orwell's novel Animal Farm: "All animals are equal, but some animals are more equal than others." [28] The comparison exposes the hypocrisy of self-appointed and self-empowered policy-making bodies, bodies that proclaim the absolute equality of all digital citizens but in practice preserve power and maintain the privileges of a small elite.
Enabling Recognition
Some of the reasons why effective digital governance has so far not been established are contained in the wording of Article 6. Recognition before the law now requires the equal recognition of everyone's rights, everywhere, in the context of one's life across inseparable literal and digital realms. [29]
What is needed are digital governance models that recognize and enable the equal participation of everyone from everywhere in open policy-making processes. The multistakeholder model comes nearest to that ideal, but it can be corrupted through artificial access barriers such as funding. How can civil society stakeholders be effective and independent when they are often the only ones who act as volunteers and depend on funding from other stakeholders to enable their participation? [30] How can trade associations be the representatives of all their members when their policymaking is dictated by the interests of their top corporate members? To explore and eliminate the influences of institutional corruption, "...we needed to think about the ways in which systems of incentives, or economies of influence, might advance or deter a collective objective." [31] The function of digital governance is not just policy-making, but also to create the conditions for it through unprecedented efforts of awareness, engagement, and capacity building, with the goal of establishing the 3e's (everyone, everywhere, equally). Like any good judge, the engagement processes must be neutral and even-handed towards all stakeholders.
Without Discrimination
Article 7 puts a lot of emphasis on protecting citizens against discrimination that stems from violations of their basic human rights. It goes even further by condemning not only active discrimination but "incitement to such discrimination." Many exploitative digital business models discriminate and, as discussed in Part 2 of this series, reduce one's digital engagement to little more than digital slavery. They enable and promote the exploitation of personal data, including surveillance, data mining and constructed digital personas, and contribute to the end result of discrimination.
As long as cyberspace is without just and effective governance mechanisms, it falls to the state to protect the rights of its citizens, including their rights as digital citizens. [32] That does not give states the right to assume that they can discriminate against the rights of digital citizens by passing laws that are at odds with the notions of rights found in the UDHR, nor to pursue special interest policies at the cost of the public good. The global nature of cyberspace places certain limits on the effective actions of states over the rights and obligations of digital citizens, who are literal or digital residents within the boundaries of the state and digital residents of the globe at the same time.
Competent Tribunals
Article 8: Everyone has the right to an effective remedy by the competent national tribunals for acts violating the fundamental rights granted him by the constitution or by law.
Article 8 presumes the existence of competent national tribunals and citizens' rights based in law. So far, no such competent tribunals exist for cyberspace, either nationally or for the global Internet ecosystem. Cyberspace lacks appropriate tribunals where a digital citizen can pursue an effective remedy to digital rights issues. There are attempts to establish cyber tribunals, including tribunals operating at the global level, but they tend to focus on issues of special, often commercial, interests such as intellectual property and domain name disputes. [33] They lack due process mechanisms for end users and struggle to reach the levels of legitimacy necessary to defend general principles and serve a multistakeholder constituency base. [34]
Article 8 also states that tribunals must be competent. According to Article 14 of the International Covenant on Civil and Political Rights (ICCPR), a competent tribunal requires independence and impartiality. Independence means a clear separation from the powers of the state, the expertise of actual judicial officers, and the independence of tribunal members from third-party support such as funding. [35] The emergence of pseudo-tribunals, expedited "takedown" courts and other fast-track processes can often trample on user rights and diminish the role of traditional judicial tribunals.
None of the past, current or proposed digital governance mechanisms and related tribunals qualifies as "competent." [36] They are not clearly separated from the institutions they "judge," and/or depend for their maintenance on support from those same institutions. [37]
In the context of the Internet ecosystem and cyberspace, multi-stakeholderism is important and should be deployed as a basic principle of digital governance. As it is currently implemented, however, it is neither independent nor impartial.
Under the guidance of the UDHR, nation-states are currently the first line of defense against what should be codified as violations of the fundamental rights of digital citizens. In the absence of broad policy engagement, these remedies will always be partial and imperfect. Digital governance's urgent task is the development of competent and impartial policy processes and competent cyber laws and tribunals.
What Does Arbitrary Mean in Cyberspace?
Article 9: No one shall be subjected to arbitrary arrest, detention, or exile.
Article 9 indirectly confirms the right of a competent tribunal (including future competent tribunals under cyber law) to order arrest, detention or exile, provided it is not arbitrary. A charge by a competent tribunal may or may not be arbitrary. An arbitrary charge or punishment may appear random, but there is always an underlying cause or reason, a "private agenda." For example, a locus of power (state, president, party, dictator) may use instruments at its disposal (including the police, the judiciary, or now, the communications authority) to intimidate its citizens and embed fear through digital exile and random arrests and punishment.
As aggrieved persons, such as ethnic and gender community groups, resort to digital means to express their concerns, expose problems, and mobilize for action, a more targeted but still arbitrary two-level prosecution is increasingly likely to follow. Individuals and whole communities can experience "digital exile" in cyberspace through the growing practice of authorities suspending both Internet access and cell phone service. Detention and arbitrary arrest can follow, based on activities labeled unwelcome in cyberspace by non-accountable governments.
In cyberspace, in the absence of policies and regulations, corporate power (social media, search engines, apps) has a wide scope to introduce terms of use rules that must be followed by its subscribers and users. At issue is what justifies, or challenges, the legitimacy of the restrictions and data use policies embedded in the user rules. Rules can be justified when they fairly benefit all digital users and are built on multistakeholder engagement and agreement. If the motive is to enhance corporate power, and users don't agree with the rules, there is little they can do. Disagreeing can result in suspension, exile and "digital death" on the platform. Digital providers, or any other digital stakeholder, should not be able to arbitrarily introduce rules, or demand the use of unjustified standards and norms, that harm the rights of users by exclusion or by threatening sanctions.
Interference with the rights of digital citizenship is justified, and not arbitrary, when digital technologies are used to harm others. Digital technologies are used for hacking, identity theft, cyberbullying, phishing and pharming (applying age-old criminal techniques like blackmail and extortion). [38] Digital technologies whose use is justified under normal circumstances, such as encryption and Virtual Private Networks (VPNs), can also be used to avoid detection and punishment in the exercise of cybercrime.
Arbitrary acts and overly broad laws can impose the blunt-force remedy of removing individual and entire-household access to Internet services, which results in a form of digital exile. Terminating users' Internet access, along with content filtering and "stay down" regimes without due process protections for citizens, results in unjust digital exile.
Avoiding arbitrary acts in cyberlaw will require intense dialogue and consultation in competent digital policy making processes.
Tribunal Characteristics
Article 10: Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him.
Article 10 describes the characteristics of the tribunal a citizen is entitled to: equality, fairness, independence and impartiality. The tribunal's role is restricted to the determination of rights and obligations and of any criminal charge.
There is an inherent conflict between national interests based on sovereignty over physical territories and the digital citizen's presence in the virtual and borderless cyberspace. Attempts by nation-states to assert their territory-based sovereignty into cyberspace and over one's digital citizenship violate Article 10, especially if such attempts take place outside the normal judicial systems. For example, administrative agencies that grant their employees judicial-like powers to exercise broad takedown and injunctive powers over Internet users violate users' rights to full, fair and equal treatment by an independent and impartial tribunal. [39]
Presumed Innocence but Assumed Guilt
Article 11: (1) Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defense.
Article 11: (2) No one shall be held guilty of any penal offence on account of any act or omission which did not constitute a penal offence, under national or international law, at the time when it was committed. Nor shall a heavier penalty be imposed than the one that was applicable at the time the penal offence was committed.
As discussed above, some states are trying to mitigate the current lack of cyberlaw with their own legislation. [40] In doing so, it is important that they afford citizens the right to due process and the right to be presumed innocent until the high bar of being proven guilty is met. States have the power to regulate access to cyberspace, digital content, applications, and activities through legislation and/or prescribed technical barriers. Suspensions of Internet and cell phone access have become increasingly common, as has intensive surveillance by the state and by commercial interests.
There is a pressing need for multistakeholder dialogue in the formulation of policies, and for oversight in policy implementation. States must take care in how they approach cybercrime and security, and must not presume misuse just because the possibility of misuse exists. All legislation needs to be evidence-based, rights-sensitive and motivated by the public interest and the common good. It cannot be based on bolstering strategic economic and political self-interests at the expense of digital rights.
It cannot be stressed enough that there is a need for an engaged discussion among all digital stakeholders about what principles need to be honored (with regard to the UDHR) and what proper policies need to be enacted for dealing with the issues surrounding digital technologies. Data collection, storage and surveillance, and AI-driven digital persona assembly, open countless issues about digital rights and digital obligations.
When the drafters of the UDHR penned Article 11, memories of Nazi Germany were still fresh. In Nazi Germany, millions were charged and assumed guilty for racial, political or economic reasons, based on a shaky foundation of unjust laws. Processes and judgments took place behind closed doors and without due process. [41] The drafters had lengthy discussions on the second paragraph of Article 11. [42] A ban on retroactive law was part of many constitutions at the time, and the pressing question was whether that meant the recent Nuremberg trials of Nazi criminals were illegal. [43] At the time of drafting it was not generally accepted that the leaders who had so deeply and widely abused human rights in the name of their ideologies could be held responsible and face an international tribunal.
Today, in a much less charged environment, we are facing a reverse ordering of the process. Efforts by states involve drafting national legislation about the rights and obligations of persons or entities engaged in global digital behavior. The questions of adequate policy development engagement, appropriate legislation, and global reach are acute today, regarding policies around behavior in the local and global digital spaces of the Internet ecosystem. [44]
While the vision of the Internet was one of "free and open to all," self-interested stakeholders have used the language of "digital disruption" and "unregulated innovation" to engage in digital business practices at the expense of digital rights. [45] In so doing, they enabled and established digital exploitation and slavery. Going forward, the quest is for proper legislation and regulations that preserve a balance between protecting digital rights and providing incentives for innovation.
One important question is whether past offenses against human rights in cyberspace should go unpunished, or uncompensated, because of the lack of legislation in place at the time. Do those whose personas have been harmed receive no compensation while the perpetrators continue to build empires based on their ill-gotten gains? Are we to forgive and forget, or should we aim for international digital crimes tribunals and restitution? These issues are not explored here.
Crimes of Omission
Many of the violations of UDHR rights in cyberspace are based not on acts of commission, but on acts of omission. In order to enable and sustain predatory digital business practices, the technological tools necessary to protect the privacy and security of digital citizens have not been implemented. Much of data security has to do with the holder of the data ensuring that it is not accessed by competitors or cybercriminals. Even the form and content behind the "consent button" or opt-out options on most applications are dense and, in most cases, exploitative in the extreme. Users are ill-informed about the real-life consequences of data use and persona assembly when they push the ever-present "I Agree" button on a digital application or web site. They have no idea of the rights they are giving away and the forces to which they are being exposed.
Where is the line?
Article 12: No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks. [46]
Never has there been a technology, short of slavery or incarceration, more suitable for interference with a person's privacy. Other than slavery, there has never been a process or technology that made the systematic invasion of a person's privacy, and the exploitation of the person, the foundation of its business model. At the same time, never has there been a technology with more promise to benefit humankind and help solve some of its central problems, here through the collection and analysis of anonymized data. Also, never has there been a technology with the potential to "violate rights, undermine privacy, polarize societies and incite violence." [47] With the rise of digital technologies, the private sector, for its own purposes, joins the state and other entities in the surveillance of citizens. The collection and processing of personal data is the basis of the digital business practices that feed the revenue and growth of the globe's largest digital and, increasingly, non-digital entities.
The drafters of the UDHR could not have foreseen the importance and relevance of the extension of privacy beyond one's limited personal data, the access by others to family, work and community data, or the emergence of data-intensive technologies such as social media, email and web browsing, and the resulting data mining and digital persona assembly.
Through the word "arbitrary," Article 12 (just as Article 9) does not treat the right to privacy as absolute: interference with the right to privacy must be necessary, legitimate and proportional. Nor did the drafters foresee personal data and constructed digital personas as property that could be traded, bought or sold without its owner's permission. This is akin to being lured or seduced into selling oneself into digital slavery. The questions to ask, at both the legal and ethical levels, are:
What are our digital privacy and ownership rights to our own data and our digital personas?
What are our obligations to respect the digital rights of others?
How do practices and boundaries get shaped by legislation and by ethical standards of behavior?
The Special Role of Privacy in Cyberspace
Privacy assumes new importance in cyberspace. In the pre-digital world, privacy was easier to maintain, and the right to privacy was respected by laws and regulations; laws were in place for cases where it was necessary to infringe on it. Much of one's time, business, community and social activity went unnoticed and unnoted, offering little scope for information retrieval. Even forensics tended to work with minimal data sets. The emergence of cyberspace has opened previously unknown possibilities of mass data accumulation and surveillance by various parties. Every keystroke, finger swipe on a cell phone, location trace, and footstep is collected, stored, identified and tagged. This opens a wide field for behavioral manipulation, surveillance capitalism and endless exploitation. Commercial misuses of personal data and the construction of use-specific personas take place without meaningful user permission and consent, and without oversight.
How such mass data should be treated is very much a subject for discussion. What should be protected as private depends on the proposed use models. What one party sees as mass surveillance, social engineering and manipulation, other parties see as an imperative and an opportunity to prevent crime, heal the sick, or support innovation. Technical applications such as artificial intelligence can make decisions about which among the neediest in society receive social welfare benefits; yet AI is also seen as being in the service of solving humankind's most pressing problems. Digital technology puts who does what with what data at the center of a society's concerns about personal and social integrity and progress. The resulting agenda of concerns should be set at the center of society's policy discussions.
Fighting the Bias: Technological Solutions for Technological Problems?
Encryption ensures the privacy of personally generated data, but it does not solve the privacy dilemma, for two reasons. Much of one's data is generated beyond one's direct engagement, and authorities will always insist on the need for back-door access to data for reasons of security. One good example here is the refusal of Apple to help the US DOJ unlock two iPhones used by the Saudi shooter who killed three people at a Navy base in Pensacola, Florida. Recent developments show that a company's principled stance does not mean it will last under pressure. [48] Like encryption, anonymized personal data requires that the digital citizen trust and authorize some competent digital governance institution, based in the state, the private sector or otherwise, to oversee and control the processes. This is further complicated by the fact that such "anonymized" data can easily be reconstructed using just a few known data points. AI-enhanced techniques, facial recognition software, and other tools can even more easily reconstruct identifiable identities for various uses. Neither the literal nor the digital citizen has any control over the algorithms used or the intended uses of such digital personas. While this is a technology-enabled problem, there is no technology-enabled solution to either the data privacy or the digital persona issues. The solutions will always require a blend of policy-based governance and the development of trust around acceptable social norms of behavior by all involved in the cyberspaces of the Internet ecosystem.
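To make the re-identification point concrete, consider a minimal, hypothetical sketch, written here in Python with the pandas library; the records, names and quasi-identifiers below are invented purely for illustration. It shows the classic linkage attack: a dataset stripped of names is joined with a public dataset that shares a few quasi-identifiers, and identities re-emerge.

import pandas as pd

# "Anonymized" records: direct identifiers removed, but quasi-identifiers
# (ZIP code, birth year, sex) retained for "research" purposes.
health = pd.DataFrame({
    "zip": ["02139", "02139", "90210"],
    "birth_year": [1954, 1987, 1990],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A public dataset (for example, a voter roll) carrying names alongside
# the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip": ["02139", "02139", "90210"],
    "birth_year": [1954, 1987, 1990],
    "sex": ["F", "M", "F"],
})

# A simple join on the shared quasi-identifiers re-attaches names to
# supposedly anonymous records.
reidentified = health.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])

The point is not the code but the asymmetry it exposes: removing names is trivial, while preventing linkage across datasets requires deliberate safeguards (aggregation, noise, access controls) and the governance to enforce them.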
We have identified universality and inclusivity as fundamental characteristics of cyberspace and the need for digital governance models that recognize and enable the participation of everyone, from everywhere, in open policy-making processes. We looked at the need for and the attributes of independent and competent tribunals, but every political or technological solution will always require human trust. Trust involves mutual understanding and respect, attitudes that flow from open engagement in dialogue among diverse stakeholder digital citizens.
Is it surprising to see how fundamental principles of the UDHR, such as universality and all-inclusiveness, find their reflection in fundamental principles for governing digital technologies? In many respects, we are confronting the same issues as were confronted in the drafting of the UDHR, albeit in the cyberspaces of the Internet ecosystem. The goal is to be able to translate them into the principles, policies and practices that govern behavior in the digital era.
Cyberspace will always have contested areas, and the decisions around policies and practices will be ongoing, much as ongoing case law enriches the meaning and understanding of legislated laws and policies. Fundamental principles are always navigational aids and aspirational. For the next steps on the way to enshrining fundamental principles for the rights of our digital citizenship, we have to establish legitimate mechanisms for digital governance, inclusive mechanisms created by and for "We the people..." in both a literal and a digital sense.
In part 4 of this series, we will discuss Articles 13 – 17 as they apply to political rights and will explore topics such as the notions of the responsible and accountable cyberstate and empowered digital citizenship.
[1] The authors contributed this article solely in their personal capacity, to promote discussion around the UDHR, digital rights and digital citizenship. The authors would also like to thank Sarah Deutsch for her valuable contributions to the article. The authors can be reached at klausstoll@thebrocasgroup.com and sam@lanfranco.net.
[3] This series of articles is presented a bit like preparing the foundation for a house; here the house is the "house of regulations and rights" in the digital age. An understanding of the desired digital rights, and of the pitfalls of policy and regulation, is required to build a sturdy and relevant platform of digital rights.
These articles are also a contribution to the upcoming 75th anniversary of the UN UDHR and a start of an Internet ecosystem-wide discussion around digital rights and policy development. Comments are welcomed. (Send comments with "UDHR" in the subject line to klausstoll@thebrocasgroup.com.) Comments will be used to update this digital rights discussion in subsequent articles. The goal is to kickstart progress toward a much-needed International Covenant on digital Civil and Political, Economic, Social and Cultural Rights.
[4] Margaret Edith Brett, The Right to Recognition as a Person before the Law and the Capacity to Act under International Human Rights, page 9, LLM in International Human Rights Law, Irish Centre for Human Rights National University of Ireland, Galway August 2012, https://www.chiark.greenend.org.uk/~chrisj/Right...
[5] Geraldine Van Bueren, The International Law on the Rights of the Child (Martinus Nijhoff 1995), 40; Manfred Nowak, U.N. Covenant on Civil and Political Rights: CCPR Commentary (2nd revised edn, N.P. Engel 2005), 369.
[6] We have discussed our digital persona in Part 1: "The advances brought by digital technologies have created a new multi-faceted dimension to our digital personas. Our physical persona is something that is gifted to us by our birth. Our digital personas are created by digital technologies. In parallel to our physical persona, with very few exceptions, all people are simultaneously acquiring multiple digital personas. They consist of digital data constructs (personas) that are linked to our unique literal being as a human. Often attached to these digital personas are human or machine-imposed value judgments affecting one's real world reputation, personal information, credit or risk worthiness, and other indicia affecting basic human welfare." For further information see section "The Digital Persona" in part 1.
[7] This area is more critical in the face of greater use of digital/Internet shutdowns to deal with domestic issues.
[8] We have discussed this before in Part 1 under the heading "Liberty".
[11] This is one of the main distinctions between a DNS lookup and the use of search engines, whose algorithms always contain elements of human bias, discretion and ambiguity.
[12] The topic of cyber law, and its future development, is young, broad and important. It merits careful attention so that it does not codify regulations that impact adversely on the digital rights of persons.
[13] For more information on the international context of cybercrimes treaties and initiatives see footnote 29 below.
[14] For example, one of the major issues between States is the application of International law to Cyber Warfare. See: Tallinn Manual on the International Law Applicable to Cyber Warfare. en.wikipedia.org/wiki/Tallinn_Manual
[15] The British Computer Misuse Act 1990 is an early example; for further information see: https://en.wikipedia.org/wiki/Computer_Misuse_Act_1990 A recent example is the EU General Data Protection Regulation (GDPR); for further information see: https://ec.europa.eu/info/law/law-topic/data-protection_en
[19] See https://digitallibrary.un.org/record/799853 "In its resolution, the General Assembly requested that a group of governmental experts be established in 2014, on the basis of equitable geographical distribution, to continue to study, with a view to promoting common understandings, existing and potential threats in the sphere of information security and possible cooperative measures to address them, including norms, rules or principles of responsible behaviour of States and confidence-building measures, the issues of the use of information and communications technologies in conflicts and how international law applies to the use of information and communications technologies by States, as well as the concepts aimed at strengthening the security of global information and telecommunications systems. The Group was also asked to take into account the assessments and recommendations of a previous Group (see A/68/98). The Secretary General was requested to submit a report on the results of the study to the Assembly at its seventieth session."
[20] For further information see footnote 13 above. The GGE experts met in a closed-door format with no observers permitted. The work of the GGE is further limited by the mandate of the General Assembly, which "mandates the work of the GGEs squarely in the realm of international security and disarmament, and thus not as a technical exercise". The GGE also "decided that the issues not under the purview of the First Committee - such as espionage, Internet governance, development and digital privacy - are not the focus of the Groups work".
The Open-Ended Working Group (OEWG) started in June 2019 and is open to all UN member states. The OEWG holds consultative meetings with other stakeholders from the private sector, civil society and academia, which can also apply to attend the meetings. The OEWG addresses six substantive issues: 1. existing and potential threats; 2. international law; 3. rules, norms and principles; 4. regular institutional dialogue; 5. confidence-building measures; and 6. capacity building. The aim of the OEWG is to develop reports on a consensus basis.
[26] For example the Paris Peace Forum, https://parispeaceforum.org/; or the Geneva Dialogue, https://genevadialogue.ch/. One of the more enlightened and promising initiatives is the Internet & Jurisdiction Policy Network, https://www.internetjurisdiction.net/
[27] For example the Internet Corporation for Assigned Names and Numbers (ICANN), https://www.icann.org/ (domain name industry); the World Economic Forum (WEF), https://www.weforum.org/ (neo-liberal capitalism); Tim Berners-Lee's "Contract for the Web", https://contractfortheweb.org/; the Cybersecurity Tech Accord, https://cybertechaccord.org/; the Cyber Peace Institute (CPI), https://cyberpeaceinstitute.org/; and the Global Forum on Cyber Expertise (GFCE), https://www.thegfce.com/.
[29] A good example of the distinctiveness but inseparability of the physical and digital is climate change. The infrastructures of the digital realm require vast amounts of energy, resulting in greenhouse gases. Digital technologies such as AI, blockchain and the cloud require large amounts of energy and contribute to pollution. Potentially, these same technologies can become major factors in overcoming the problems they cause. The achievement of the UN's Sustainable Development Goals depends to a large extent on the strategic deployment of digital technologies, the same digital technologies that, improperly used, will aggravate the underlying problems.
[34] We have already identified a number of these general principles, for example the 3e's and the dual, separate but inseparable, character of cyberspace.
[37] The current "ombudsman" model at ICANN, although useful and skillfully deployed, is flawed for the very same reasons. See: https://www.icann.org/ombudsman/
[39] This example applies to the CASE Act, under which the US Copyright Office intends to create a special copyright takedown tribunal, but it may also apply in Europe, because the EU Copyright Directive mandates that Member States create "out of court redress mechanisms" to settle copyright disputes. In the US, some of these forced arbitration provisions have been found to be unconscionable and to violate citizens' due process rights.
[42] "The UDHR was being drafted just after the Nuremberg war crimes trial had ended, with a similar trial still under way in Tokyo. Article 11's respect for the presumption of innocence was agreed on quickly. The drafters struggled over the wording of the second paragraph. They were concerned that a ban on retroactivity could be used as an argument that the Nuremberg trials had been illegal. They had tried for "crimes against peace" and "crimes against humanity" which previously did not exist in national laws. From: Universal Declaration of Human Rights at 70: 30 Articles on 30 Articles - Article 11, https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx
[44] The reference to the UDHR and the Nuremberg Trials does not compare the incomparable, the Holocaust, with the issues around digital rights, due process and digital exploitation. We nevertheless can and should take history as lessons learned and inspiration. We can use it as a navigational and interpretive aid in a new circumstance, the global Internet ecosystem, where the local, the global and the personal present complex policy challenges. We are in a time when digital exploitation and exclusion are pressing issues. We can look to the UDHR for what it says to us today, and for how we can react and prevent cyberspace from being used for repression, exploitation and exclusion.
See also footnote 19 in Part 2 on digital slavery.
[45] One of the most prominent and insistent proponents of the permissionless innovation ideology, Vint Cerf, had to admit its limitation: "All the openness led to what many of us call permissionless innovation, all of which was very satisfying for me, watching this grow in a very organic way. There is only one small little detail that had not penetrated my thinking in the early stages and that's: What happens when the general public gets access?" Vint Cerf, US IGF Washington 2017, https://www.youtube.com/watch?v=J4HxqfJK13I
[46] Article 12 of the UDHR finds its equivalent in Article 17 of the International Covenant on Civil and Political Rights.
The subject of digital privacy has been discussed several times over the years. In General Comment No. 16 (1988) on Article 17 of the International Covenant on Civil and Political Rights (ICCPR), covering the right to privacy, family, home and correspondence, and the protection of honor and reputation, the United Nations Human Rights Committee demands that state surveillance be grounded in clear and precise law that safeguards a citizen's right to privacy. United Nations General Assembly Resolution 68/167, on "the right to privacy in the digital age", was passed on December 18, 2013.
Additional actions were taken within the UN framework. Resolution 68/167 included a request by the General Assembly that the High Commissioner for Human Rights prepare a report on the right to privacy. The Human Rights Council, based on its decision 25/117, held a panel discussion on the right to privacy in the digital age in 2014. The Council also appointed, in 2015, based on its resolution 28/16, a Special Rapporteur on the right to privacy for a 3-year period. We should also mention the recent Report of the UN Secretary-General's High-level Panel on Digital Cooperation, "The Age of Digital Interdependence", https://www.un.org/en/digital-cooperation-panel/. All this effort shows that whilst the UN takes the subject of digital privacy seriously, it has no powers to ensure and implement the right to privacy beyond making recommendations and hoping for the compliance of states and the private sector.
[47] page 12, Report of the UN Secretary-General's High-level Panel on Digital Cooperation.
There is a degree of dread in the investor community that prized domain names will be forfeited to trademark owners in proceedings under the Uniform Domain Name Dispute Resolution Policy (UDRP). Since the UDRP has no internal appeal mechanism to correct errors of law or judgment, the sole recourse is an action in a court of competent jurisdiction as spelled out in UDRP paragraph 4(k). In the U.S., this would be a district court under the Anticybersquatting Consumer Protection Act (ACPA). The "dread" of forfeiture is not an imaginary concern; it has a real basis stemming from a number of negative experiences with UDRP awards that have only been "corrected" by commencing actions and settling claims of reverse domain name hijacking in ACPA actions. I discussed settlements in an earlier essay, Prudential Settlements for Alleged Cybersquatting/ Reverse Domain Name Hijacking under the ACPA. Settlements are one thing, but what if there are genuine contested issues that resist settlement and proceed to summary judgment, trial and judgment? What precisely are the remedies, are they automatic to the prevailing party, or if not, what proof is sufficient for attorney's fees and statutory damages?
There is (I think) a mistaken belief in the investor community, formed perhaps as a rationalization should the dread be actualized, that the prize for prevailing in an ACPA challenge to a UDRP award includes not just injunctive relief but also attorney's fees and statutory damages. This belief should be shelved. A look back at the few ACPA cases that have been tried to judgment (see other ACPA essays on <circleid.com>) and particularly the latest experience in Black v. Irving Materials, Inc., 17-CV-06734-LHK (N.D. Cal 2019/2020) should settle the question. The not-surprising answer is that remedies must be earned. Plaintiffs get no more than they can prove; and if they lack proof, there is nothing to get. There were a number of motions in Black before and after trial. While an advisory jury did rule in Black's favor, finding that he registered <imi.com> in good faith and that the registration was (to use the ACPA term) "not unlawful", this marvelous result came at a significant monetary cost: Black's motion for attorney's fees was denied. The reasons are important and discussed below. He was ineligible for statutory damages because the claim that would have supported the remedy was dismissed from the complaint in an early motion.
As a general observation, Panels appointed to arbitrate under the UDRP mainly reach the right result — approximately 92% to 93% of cybersquatting claims are indefensible and generally not defended — but within the other 7% to 8% a good number of accused respondents are registrants who have superior rights. For further context, UDRP awards are rarely challenged. Few have actually gone the distance to summary judgment or trial in U.S. federal courts, but for those that have, the decisions sketch out the disappointments under the ACPA. When the challenge moves to federal court, standards and remedies change. The evidentiary demands tighten. Unless complaints are well-drafted, some claims (the kind that must necessarily be sustained for statutory damages and attorney's fees) are not likely to survive motions to dismiss.
The basic remedy under both the UDRP and the ACPA is injunctive relief: in the simplest terms, the penalty for cybersquatting is forfeiture (UDRP and ACPA); and the relief against UDRP error for reverse domain name hijacking is restoration of the domain name to the registrant (§1114(2)(D)(v)). If a registration is found to have been lawful under this section, there could have been no violation of §1125(d) (the ACPA). Under the Lanham Act, though, there are two additional remedies, namely attorney's fees (§1117(a)) and statutory damages (§1117(d)). However, unless the companion provision to §1114(2)(D)(v) is pled (that is, (D)(iv)) and survives dismissal, there can be no statutory damages. To prevail under §1114(2)(D)(iv) requires proof of "knowing and material misrepresentation," in essence a fraud claim; otherwise, the only remedy is injunctive relief for reverse domain name hijacking, and the prevailing party is not automatically eligible for attorney's fees and can be shut out of statutory damages.
In Black the Court dismissed the (D)(iv) (fraud) claim. In practical terms, the best Black could hope for was recuperation of his attorney's fees, but to prevail on attorney's fees, the case must be exceptional. The Supreme Court defines "exceptional" as the "rare case" that "stands out from others with respect to the substantive strength of a party's litigating position (considering both the governing law and the facts of the case) or the unreasonable manner in which the case was litigated". Octane Fitness, LLC v. Icon Health and Fitness, Inc., 572 U.S. 545, 555 (2014). It will be noticed that there are two elements: there must be "substantive strength" to a party's litigating position and the manner in which the case is litigated. Either weakness of a party's litigating position or the unreasonable manner in litigating the claim can sink a motion for attorney's fees. Black lost on both counts.
Two investors in earlier cases have been caught with unlawful registrations and marked as cybersquatters: Gregory Ricks (once as plaintiff and once as defendant) — Ricks v. BMEzine.com, LLC, 727 F.Supp.2d 936 (D. Nevada 2010) and Bulbs 4 E. Side, Inc. v. Ricks, 199 F.Supp.3d 1151 (S.D. Tex., Houston Div. July 18, 2017) — and David Lahoti (Lahoti v. Vericheck, Inc., 636 F.3d 501, 510-11 (9th Cir. 2011)). The Court found Lahoti a "rare" case. It was exceptional because of
Lahoti's willful registration and use of the www.vericheck.com domain name, his "attempt to extort thousands of dollars from Vericheck," his disregard for Vericheck's trademark rights, his "pattern and practice of cybersquatting, including a pattern and practice of abusive litigation practices," and his "disregard for the submission of inaccurate answers to interrogatories." Having found the case exceptional, the district court ordered Lahoti to pay Vericheck $78,109.95 in fees and costs.
But, in BMEzine.com, the Court denied the defendant's request for attorneys' fees because it has "shown only that Ricks is not entitled to prevail on any of his claims against the LLC. Defendant has not established Ricks' conduct was malicious, fraudulent, deliberate, or willful as to Defendant LLC with respect to the claims upon which Defendant LLC has prevailed at this stage of the proceedings."
When we turn to domain name holders as plaintiffs or defendants, the stakes are higher because litigating in federal court is expensive, and even if the plaintiff is right, the cost could be prohibitive. One should distinguish good faith registrants (investors and other legitimate businesses) from bad faith registrants (properly labeled cybersquatters). The latter, as plaintiffs, have incurred either or both attorney's fees and statutory damages: Ricks, for example, in BMEzine, but several other domain name holders with no actionable claim have also rolled the dice and paid the piper. Good faith registrants, by contrast, are in a predicament: unless they are able to recoup their legal fees, there will be nothing to offset the cost of litigation. In Black, attorney's fees (allegedly close to $500,000) went unrecouped; the pleasure of prevailing came with the unhappiness of paying for it. In another action still pending, Dent v. Lotto Sport Italia S.p.A., cv-17-00651 (District of Arizona), the plaintiff will be seeking attorney's fees, and the same fate or happiness may be waiting, since in that case also the (D)(iv) fraud claim was dismissed early in the case.
It is not all bad news, but the alignment of facts has to support either or both remedies: attorney's fees and statutory damages. In AIRFX.com v. AirFx LLC, CV 11-01064-PHX-FJM (D. Ariz. March 7, 2013) and Joshua Domond and Harold Hunter, Jr v. PeopleNetwork APS d/b/a Beautifulpeople.Com, Beautiful People, LLC, Greg Hodge, and Genevieve Maylam, 16-24026-civ (S.D. FL. Miami Div. 11/9/17), and No. 17-15222, Non-Argument Calendar (11th Cir. 2018), the domain name registrants prevailed in UDRP proceedings and prevailed again as defendants in the ACPA actions: on the uncontested facts, the plaintiffs had no actionable claim, but they sued anyway, which is one of the foundations for attorney's fees, and on those grounds the courts awarded attorney's fees in both cases.
In Airfx, attorney's fees, which amounted to over $100,000, were imposed because
[A.] defendant's counterclaims were groundless and unreasonable.... [And B] defendant's trademark infringement counterclaim was groundless and unreasonable. The claim was groundless because the defendant did not present any evidence that plaintiffs' use of the AirFX mark was commercial.
To this was added an additional strike in that plaintiff's litigating manner (per Octane Fitness) was clearly unacceptable to the Court:
On August 24, 2012 we granted plaintiffs' motion for summary judgment on defendant's counterclaim. Neither the factual basis for our conclusion, nor the law compelling it, were genuinely subject to dispute. On October 20, 2011, before defendant filed its counterclaims, we informed the parties that the original registration date of airfx.com was the determinative issue in connection with any ACPA claims.... [While i]t is unclear from the record whether defendant knew the original registration date of the airfx.com domain name at the time it filed its counterclaims ... by the time the parties filed their motions for summary judgment, it was undisputed that airfx.com was originally registered more than two years before the AirFX mark existed. It was unreasonable for defendant to pursue its ACPA counterclaim once it discovered that the airfx.com was originally registered before the AirFX mark. (Emphasis added)
From this, it is reasonable to draw another general observation that should alert plaintiffs to proceed with caution before filing an ACPA claim: the facts must align with their theory of the case. It appears from the Court's trial decision that Black had damages, but they were inadmissible as evidence for statutory damages since the Court had dismissed the (D)(iv) claim. On the final motion, for attorney's fees, the Court cited Octane to the effect that "Black's success on the subsection (v) declaratory relief claim does not independently render the instant case exceptional" and explained the reasons why:
Irving notes that throughout the instant case, "this Court has clearly and unequivocally warned [Black] that he does not have either a claim for reverse domain name hijacking or any other affirmative claim against" Irving. Opp'n at 19. Notwithstanding this fact, Black repeatedly "ignored this warning," which militates against the award of attorney's fees in the instant case. Id. The Court agrees. The "unreasonable manner in which the case was litigated" by Black is an equitable factor that militates against the award of attorney's fees.
There is surely a lesson here; there must be both proof and decorum. The Supreme Court held that it matters, and the Courts in Airfx and Black underlined the consequences. The other lesson is this: in order to prevail in both protecting one's assets and coming out whole in money terms, domain name plaintiffs must earn their right to attorney's fees and statutory damages. By earned, I mean having the factual evidence to support the remedies. These observations about earning the desired result, incidentally, apply equally to UDRP proceedings. Panels assess rights based on the submitted record; if there's a stumble in their assessment, it can come in applying the wrong factors and giving the complainant-trademark owner the benefit of the doubt on the issue of bad faith.
Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP
In 2019, under the aegis of the Internet Governance Forum (IGF), a pilot project was conducted into the causes of, and solutions for, the generally slow deployment of internet security standards: standards that, on mass deployment, make the Internet and all its users safer, indiscriminately and immediately.
The report
Recently the report 'Setting the standard. For a more Secure and Trustworthy Internet. The Identification of Pressure Points in Society to Speed up Internet Standards Deployment' was published on the IGF website. Information was gathered by means of an international survey, breakout sessions at the IGF, dozens of interviews with stakeholders, and desk research. The work focused on two questions: 1) What are the reasons for slow deployment? and 2) What are solutions to speed up deployment? It showed that underneath all other reasons provided lies a collective action problem. To break out of this state of inertia, 6 recommendations, 25 identified pressure points in society, and 7 action plans are presented, including the stakeholders who have to be(come) involved to have a chance at success in speeding up deployment.
Six standards
The project took six standards as examples to start the discussion: three internet standards by the official definition (DNSSEC, RPKI and BCP38) and three that are not (the OWASP Top 10, ISO 27001 and the Safe Software Alliance principles). For ease of writing and reading, all are called internet standards within this context.
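To give a concrete sense of what "deployment" of one of these standards looks like from the outside, here is a minimal sketch, assuming Python with the dnspython package and a DNSSEC-validating resolver such as Google's public resolver at 8.8.8.8; the queried domain is only a placeholder. It asks the resolver whether answers for the domain validated under DNSSEC by inspecting the Authenticated Data (AD) flag in the response.

import dns.flags
import dns.resolver

def dnssec_validated(domain: str) -> bool:
    # Query a validating resolver and request DNSSEC processing.
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]        # a DNSSEC-validating public resolver
    resolver.use_edns(0, dns.flags.DO, 1232)  # enable EDNS and set the DO (DNSSEC OK) flag
    answer = resolver.resolve(domain, "A")
    # The AD flag means the resolver validated the response's chain of trust.
    return bool(answer.response.flags & dns.flags.AD)

print(dnssec_validated("example.org"))        # placeholder domain

A check like this also illustrates the interviewee's point quoted below that "no one cares if you deploy": nothing visibly breaks for the ordinary user when the result is False, which is exactly the collective action problem the report describes.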
Causes
Many participants agreed on the main cause of the slow uptake: the lack of a business case. If there is no demand, in general there is no supply. Research showed that there are underlying causes. The report shows that there is a lack of pressure on decision-makers from the sides that matter. As far as the project was able to ascertain, and with no one pointing to another conclusion, there is not (enough) pressure from laws and regulation, the media, or consumer organisations. As one of the interviewees stated: "No one cares if you deploy and no one cares if you don't."
Additionally, the overwhelming majority of consumers are not willing to pay for security measures, partly because they do not understand the implications of insecurity. The entrepreneurs willing to deploy face a negative business case or operate in a niche market.
Another important conclusion is that it is not (just) technically proficient employees who decide on deployment of the standards. Yet outreach from the technical community is often aimed at these people and, unfortunately, does not reach the level of success needed to make the Internet safer, as they do not decide on deployment. This calls for different aims and a change of narrative. It is the owners, board members and financial officers who need convincing, and that may take pressure from other stakeholders to achieve change.
Governments have not taken internet standards into law (ISO 27001 is a voluntary exception), which is the situation preferred by nearly all we have spoken to. At the same time, most of the cybersecurity efforts of governments (and their agencies), but also of, e.g., banks, are aimed at the only stakeholder with limited power where deployment of standards is concerned: the consumer, or "user" as the internet industry prefers to call its customers. In other words, there are no carrots and no sticks of any kind, making the situation far worse than merely having no business case.
Collective Action Problem
All this results in a collective action problem, where there is no demand and no incentive to change behaviour and deploy the Internet standards. Usually, it is the government that society looks towards for solutions. In many sectors, such as health, (air) traffic and agriculture, this is completely normal and accepted behaviour. A question in need of an answer is: what makes the Internet so different that it justifies the absence of governments, while the market cannot solve the enormous security challenges facing it? Perhaps it becomes necessary to look at the problem as a (digital) health issue. What perspectives does that provide to act upon?
This report does not answer these questions. It searched for potential solutions and pressure points in society that can contribute to breaking up the collective action problem. A few examples are presented below.
Recommendations
The six recommendations are an accumulation of advice provided. Although there is a near consensus among participants that action is needed, there is no consensus on the precise way forward. The first five were tested in the breakout sessions (number 6 came out of the sessions) at the IGF and are seen as sensible.
1. 'Create a business case for the deployment of internet standards.'
2. 'To deploy internet standards successfully, they need to be incorporated by reference into law or legally binding regulations, including a designated regulator.'
3. 'To deploy internet standards successfully requires building security by design/default into products and services.'
4. 'All stakeholders should collaborate on coherent strategies for multilingual awareness-raising of internet standards and their effect on internet security.'
5. 'Internet standards and architecture must become part of education curricula.'
6. 'Standardisation processes are advised to include a consultation phase with government and industry policy makers and civil society experts.'
The paradox this report lays bare is that a large proportion of the participants see legislation as the only option to force the industry into deploying, yet no one wants it. As legislation is seen as the least desirable option, this comes with a moral obligation to step up on all the others: the absence of legislation cannot be allowed to equal non-deployment. Hence the pressure on those who have to deploy needs to be created elsewhere. The report mentions 25 options, from parliamentarians addressing the issue to industry, to consumer organisations testing ICT services and products, from regulation to media publications.
Next steps
Where deployment of standards is concerned, a government can take on a few roles. Standards could be demanded through procurement. Standards could be demanded on the basis of duties of care. A question in need of an answer is what regulators can achieve on the basis of current laws, whether telecommunications, privacy, or consumer law. When all else fails, the government is the legislator, but even then, cooperation is of the utmost importance.
Mistrust of governments is one of the reasons the technical community remains more or less aloof from other stakeholders who could play a role in making deployment happen. It is of the greatest importance that these others understand what internet standards are, why they exist, how they are made, and why their deployment matters for a more secure Internet. To ensure that future measures are the right ones, interaction is key. Hence this report invites the IETF and ISOC to participate actively in the next phase and to assist in creating a change of narrative and a new direction of outreach, to prevent legislation where possible. Their role lies in leading the other stakeholders forward and in making it impossible, at the highest levels of industry and society, to plausibly deny knowledge of Internet standards in need of deployment. Why? Because the decision to deploy is seldom a technical decision but a financial one, an investment (without return). This calls for a different approach and narrative.
All this translates into seven actions that you can find in the report. To massively deploy internet standards is and will be a herculean task involving many stakeholders with different and most likely competing interests. Deep down, however, all stakeholders around the globe have the same interest: not to be hacked, not to have compromised or lost data, not to lose money, etc.. This is a starting point. And, when all is said and done, all will have to pay for security. That goes without saying.
Conclusion: a no-brainer
Ideally, this report is not the end but a beginning: a start to work on deployment by enacting the recommendations and gathering the stakeholders into the action groups. The IGF is a neutral platform where all involved are equal. The first and most difficult steps can be taken here before the results are carried outside of the IGF to be implemented. All with one aim: to make deployment of security-raising standards a no-brainer for all involved.
This is Part 4 of a series of articles published (here in CircleID) on the UDHR and human rights in the cyberspaces of the Internet Ecosystem. [2] Here we discuss Articles 13-15 and touch on other topics such as the role of cyber governance, empowered digital citizenship, and whistleblowers. [3] At this point in the series on the UDHR in the digital age, it is useful to pause and remind ourselves of the purpose of this analysis. The exponential growth of digital cyberspace and the Internet ecosystem has both opened exciting new virtual territories for human activity and disrupted many elements of society's existing (literal) human social order. At the same time, it has produced major tears in society's social fabric and posed challenges to the underlying social contract. [4] In this series of articles we try to do several things.
First, we looked at the UDHR and its underlying principles to see what guidance the UDHR might give in defining, or at least pointing the way toward, the principles and policies that support the rights and duties of digital citizenship. For some aspects, the guidance around protected rights is straightforward. Other areas are unique to the global span of the Internet ecosystem and demand fresh thinking and fresh approaches.
Second, our intention is to explore possible mechanisms for pursuing ways forward. There may be neither a unique path nor a unique arrangement of mechanisms. There is, however, a unique starting point, one enabled by the scope of the Internet ecosystem. That starting point, aspirational at this stage, begins with engaged digital stakeholders, with the broader goal of promoting engaged digital citizenship. With all its pitfalls and its uses by those wishing to attack democracy and democratic processes, the Internet ecosystem offers a venue for democratic multistakeholder engagement in policy and decision-making processes that was heretofore unimaginable. Internet governance for stakeholder engagement will likely include some blend of national, international, and multilateral structures and processes. It should be formulated using a multistakeholder process.
The Internet ecosystem has changed reality in ways more profound than the changes from the Industrial Revolution in the late 19th Century to the early 20th Century. Those changes were not recognized at the end of WWI when the Treaty of Versailles imposed peace conditions that contributed to almost half a century of terrible consequences. [5] While the principles contained in the UDHR may be robust and durable, the context has changed dramatically, as has the speed of change.
This calls for an ecosystem approach, not a "whack-a-mole" symptomatic approach, to the issues surrounding the rights and duties of one's presence and residency in the Internet ecosystem. It calls for an engaged stakeholder approach that combines progress in governance and regulation with the rebuilding of an appropriate social fabric and social contract.
Finally, this series of articles is meant to contribute to the upcoming 75th anniversary of the UN's Universal Declaration of Human Rights, and to prompt an Internet ecosystem-wide discussion around digital rights and Internet ecosystem policy development. [6] The goal is to kickstart progress toward a much-needed International Covenant on digital Civil and Political, Economic, Social and Cultural Rights.
Historical Context
The UDHR was drafted in an important historical period. It was written during the time of the persecution and mass migration of European Jews, the refusal of the world's nations to grant migrants asylum, British limitations on Jewish immigration to Palestine, civil war between factions in Palestine, the resulting two-state solution proposed by the UN in 1947, and the founding of the State of Israel in 1948. [7] The question now is what the Articles mean at this moment in time, as persons and other entities (communities, companies, governments) take up residence (migrate) in the cyberspaces of the Internet ecosystem. Residence within the cyberspaces of the Internet ecosystem is simultaneous with maintaining residence within nation-states. We cannot simply translate the UDHR into cyberspace. On the other hand, there is no need to redraft it for the digital age, as our fundamental human rights remain the same; the challenge is how to apply them in a new context. We must start from the principles behind the UDHR and use them as navigational aids. We should look to the UDHR to help us understand our rights and obligations in cyberspace and how to build respect for the digital dignity and rights of others. We must also examine what needs to be codified into formal covenants with regard to rights and duties in cyberspace, and what needs to become part of the social fabric and underlying social contract.
Article 13: (1) Everyone has the right to freedom of movement and residence within the borders of each state.
Article 13: (2) Everyone has the right to leave any country, including his own, and to return to his country.
Central to any discussion of Articles 13-15 regarding rights and duties is revisiting the notions of nation and state as they relate to the cyberspaces of the Internet ecosystem, a subject previously discussed in Part 1 of this series. [8] Article 13: (1) is set within the framework of statehood, whilst Article 13: (2) is set within the notion of country.
Cyberstate Basics
Within cyberspace, we are living in technological and social constructs and virtual territories. Initially these were the websites we visited and the social platforms (email, social media) we used. Increasingly, they now include the growing universe of the Internet of Things (IoT), with its immense tracking and data archiving. Each of those online spaces could be compared to a nation in terms of its process and data control. Many digital tech companies and their high-level representatives act in their relationships with states as if they were nations in their own right. [9] Like nations in the real world, digital territories are influenced and defined by political, economic, geographic, ethnic, religious, and language factors. What are, or should be, our rights and obligations as digital citizens within the digital territories of cyberspace?
Our digital residence in the cyberspaces of the global Internet ecosystem stands in marked contrast to our digital residence in the nation-state where we physically reside. Governments have sovereignty and authority over the domestic cyberstate. Persons and entities have a state-defined digital citizenship and residency. They also now have a nation-like digital residence in the global Internet ecosystem. However, cyberstate governance, commonly called "Internet Governance" (IG), is in its infancy at both levels, in terms of what it is and what it covers.
Approaches to sovereignty in cyberspace go back at least to 1996, when John Perry Barlow published his "Declaration of the Independence of Cyberspace". [10] "We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before."
The current state of Internet governance, in its infancy, has not reached the status of a cyberstate with defined mechanisms of cyber-governance. As well, the Internet has disrupted the behavioral norms of the prevailing social constructs of the late 20th Century, resulting in major tears in society's social fabric and damage to the underlying social contract. Repairing the social fabric, and agreeing upon a new social contract base, is an essential complement to the development of both digital governance and digital citizenship. It is a pressing priority of the moment.
A just state is built by the political will and engagement of its citizens. Constructing the layers of the cyberstate, from local to global, will require shared aspirational goals and vision across stakeholders. The steps needed to create a system of digital governance within a country's cyberstate are rooted in the state's Constitution and associated institutions for policy development, and in their implementation in the literal world. That work is challenging and will benefit from starting with the principles underpinning the UDHR.
The power and legitimacy of cyber governance stem from the recognition of a state's sovereignty and its right to govern domestic cyberspace. [11] Within one's country of citizenship, national digital citizenship comes under the governance of that domestic cyberspace. At the same time, persons and entities have a global residence in cyberspace and may have local residences in other countries. [12] This raises the issue of digital migration and one's ability to change digital residence across states and governments at will. It also leaves open our understanding of what digital citizenship means at the global level. ICANN, responsible for the security and stability of the global Internet, has a motto: "One World, One Internet." What that means in terms of global digital citizenship, domestic digital citizenship, and cyberstate governance is yet to be worked out. Ideally, this will be determined consistent with, and with help from, the principles in the UDHR.
Cyberspace Residence Requires Empowered Digital Citizenship
Article 13: (1) gives everyone the right to freedom of movement and residence within the borders of a state. Residence and citizenship are not necessarily the same, so Article 13 does not address rights and duties regarding citizenship. Residency in cyberspace operates both within the nation-state and globally outside the nation-state. [13] Ideally, there should be only one set of cyberstate policies and regulations, one digital citizenship for all. However, nation-states can and do distinguish between residence and citizenship. They may have different policies for each, policies that also differ from those of other nation-states. At the global level, that is not the case. In global cyberspace everyone is a global resident and, by extension, a global citizen. There is no way to differentiate between the two, and no way to acknowledge global residency but deny global citizenship.
States are in the process of building their policies and regulations for national cyberspace, and for the rights and duties of national citizens and residents, virtual or literal, in national cyberspaces. This was the motive behind Facebook moving its user files out of Ireland when the GDPR took effect. Subscribers remained residents of the Internet ecosystem but no longer held that residency in Ireland.
We have now entered a period of cyberspace governance building. As countries and regions act, there is a high degree of consultation and some collaboration. Hopefully, they will be guided by principles like those in the UDHR, and policies of good governance will prevail. In Part 1 of this series, we stated: "Today, cyberspace bestows on each of us a dual, but inseparable, physical and digital citizenship. Even if we don't know about cyberspace or are unable or have decided not to use any of the digital technologies, we are still digital citizens with rights (and corresponding duties)". Access to cyberspace becomes a human right, one that is indirectly enshrined in the fundamental human rights of the UDHR.
At the level of the global Internet ecosystem, the issue of global digital citizenship is more complicated on one front and easier on another. The explosive expansion of the Internet has made everyone a de facto resident and global citizen in cyberspace. The rights and obligations of global digital citizenship are yet to be defined. Alongside the key issue of what rights and obligations should accompany that citizenship is the equally key issue of who should be involved in formulating them.
This is where the UDHR comes in, in two ways. First, the principles in the UDHR constitute the key principles that should be enshrined in a declaration of digital rights, or more properly, the rights of global digital citizenship. Second, any structures of global governance are likely to be enshrined in international or multilateral treaty agreements. They are unlikely to come from some sort of overarching cyberstate. [14] It is essential for the governance of cyberspace that policymaking and enforcement tools are in place that ensure global digital citizens are empowered in the policy-making processes, are never deprived of their full rights (and duties) of citizenship, and enjoy a safe and secure residence in the cyberspaces of the Internet.
To delineate between national digital citizenship and citizenship within the cyberspaces of the Internet ecosystem, we use the term digital citizenship for the former and global digital citizenship for the latter. In both cases, effective democracy calls for engaged citizenship, engaged digital citizenship, and engaged global digital citizenship. Here our focus is on stakeholder-engaged global digital citizenship.
There are state and private sector initiatives underway to create ring-fences around sections of cyberspace. [15] Creating isolated cyberspaces runs against the very nature and strengths of cyberspace. Such efforts are to be resisted in that they devalue the strengths of the Internet ecosystem as a tool for human understanding and human development. Such ring-fenced spaces would diminish global digital citizenship, establishing a second-class digital citizenship that lacks access to one's rights as a global digital citizen.
Opting Out: A Conundrum
Article 13: (2) refers to the right to leave any country, including one's own, and to return to that country. At one level, this presents a simple issue. With the advances in digital government services and e-governance, it is easier for citizens to engage their governments and engage in governance. It also means that those without adequate digital access have diminished citizenship, which underscores the need to treat digital access as a public good and not just another private consumable. However, it is virtually impossible to become "non-resident" in a national cyberspace. Even when physically leaving a country, one is liable to remain subject to that country's digital citizenship rights and duties, even in exile. There are many examples that show how difficult, or even impossible, it is for many to erase their digital footprints.
Residence in cyberspace is, of course, completely composed of data, data tagged with personal identifiers. Such data goes well beyond the personal data we deliberately input into the cloud through transactional actions. It includes data collected from one's behavior as one browses and roams around the Internet. More importantly, it increasingly includes ambient data generated by one's simple presence in life: data from cell phones, automobiles, the Internet of Things (IoT), third-party surveillance, and a myriad of other sources. Such data, personal or not, is broadly tagged with identifiers and used to construct profiles. It is increasingly fed into "black box" artificial intelligence (AI) algorithms to construct digital personas, used for marketing, monitoring, and a myriad of other purposes. [16] While Article 13: (2) speaks about a right to leave, in cyberspace there is nowhere to go, nowhere to hide. Being a digital resident comes with the fact of being alive, with residency possibly extending both after death and before birth. [17] One's final wish may even be to maintain digital residency forever. [18] This makes protected access to the proper rights and duties of digital citizenship all the more important.
Digital Asylum: Rights, Obligations and Duties
Article 14: (1) Everyone has the right to seek and to enjoy in other countries asylum from persecution.
Article 14: (2) This right may not be invoked in the case of prosecutions genuinely arising from non-political crimes or from acts contrary to the purposes and principles of the United Nations.
Cyberspace, by its nature, is a network of networks based on common technical standards that operate at the technical level, independent of any ethical standards. With many different policies, rules of conduct, and cultural practices, what might be permissible in one context may be unacceptable, or the cause for persecution, in another. What is considered normal and healthy in an open society may be subject to censure or punishment under a repressive regime.
The right to freedom of digital asylum may be complicated and needs exploration. If within one's digital residency one has been persecuted or denied access, digital migration still leaves the literal person open to persecution. [19] For digital asylum to have meaning, it might have to be accompanied by physical migration.
Issues arise here. Can there be a digital asylum with some protections? Can digital crimes abroad be subject to the territorial jurisdiction of one's physical residence? [20] If a digital persona is persecuted in a digital space by entities that exert political power over that space, or if political powers are unable to protect that digital person, what rights are relevant? What does "a right to leave" mean? [21] How does a right to asylum assure a right to protection? If there is a duty in digital spaces to grant asylum, what does that mean?
Also, foreign digital residency can be like dual citizenship and exist for purposes other than asylum. Digital residency may be the presence of a persona within a country despite the person not having literal citizenship. [22] What rights does the digital asylum resident have within the literal rights and duties of the host country?
While in principle the extension of a right of asylum to digital/cyber residents should exist, there is much work to be done to understand what needs to go into the rights and obligations/duties of digital residency, digital citizenship and digital asylum.
Asylum and Migration: Political Crimes and Contrary Acts
Article 14: (2) restricts claims to asylum to situations where the claim is not based on a political crime or on acts contrary to the purposes and principles of the UN. However, it is not easy to define what constitutes a political crime. The definition is influenced by both the context and the point of view of the beholder, whether that beholder is a person or an institution:
"...a political crime or political offence is an offence involving overt acts or omissions (where there is a duty to act), which prejudice the interests of the state, its government, or the political system. It is to be distinguished from state crime, in which it is the states that break both their own criminal laws or public international law." [23] States may define political crimes as any behavior perceived as a threat, real or imagined, to the state's survival, including both violent and non-violent oppositional crimes. Such criminalization may curtail a range of human rights, civil rights, freedoms. Under such regimes conduct which would not normally be considered criminal per se is criminalized at the convenience of the group holding power. [24] Political crimes in the context of the UDHR are considered an abuse of human rights. Asylum is the mechanism that protects human rights against arbitrary state power, be it driven by political, economic, religious or other forces. Extending this notion to the protection to one's digital residency and citizenship is one of the challenges on the global Internet policy and governance agenda.
The purposes and principles of the UN are stated in the first two chapters of the UN Charter. [25] It identifies "members", "people" and "peace-loving states" that promote and encourage respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion. Peoples and states are addressed here, but the ultimate focus is on the rights of the people. As in the case of political crimes and the work of the UN itself, the yardstick used to measure and evaluate behavior is the UDHR. Any acts contrary to human rights are acts contrary to the purposes and principles of the United Nations. These protections need to be extended to digital personas and residency in the cyberspaces of the Internet ecosystem.
Whistleblowers and the Need for Protections
Digital whistleblowers are an example of an area that needs further analysis and exploration as we flesh out the rights, duties and protections regarding the integrity of digital activities. At the other end of the spectrum, purveyors of misinformation, "false news" and malicious information also require attention and accountability for their actions. [26] This is a complicated and muddy area, so we will dwell on several recent incidents. [27]
Many states view the publication of classified, or unclassified but embarrassing, information, increasingly from digital sources, not as a political crime but as a criminal activity that does not deserve the protection of Article 14: (1). [28] When it comes to whistleblowers in the context of cyberspace, names such as Snowden, Assange, and Manning come to mind. [29] The question is whether their whistleblowing deeds deserve protection and literal asylum, or are non-political crimes that are not subject to human rights protection.
Snowden describes his motivation clearly:
"...My sole motive is to inform the public as to that which is done in their name and that which is done against them." [30] Snowden qualifies that, saying that the disclosure of information must be justified and serve a public interest." [31] In contrast the US Government argued that the major portion of the content:
"… had nothing to do with exposing government oversight of domestic activities. The vast majority of those were related to our military capabilities, operations, tactics, techniques and procedures". [32] In 2013, Snowden was partially vindicated when a U.S. federal judge ruled the collection of U.S. phone metadata by the NSA was likely unconstitutional.
Assange and WikiLeaks did not impose criteria on which documents to publish. They published available data from what they perceived as "powers" and let the rest of the world decide. They view WikiLeaks as acting as a "dropbox" to ensure that journalists and whistleblowers are not prosecuted for disclosing sensitive or classified documents. According to WikiLeaks, its goal is:
"to bring important news and information to the public ... One of our most important activities is to publish original source material alongside our news stories so readers and historians alike can see evidence of the truth." [33] This is complicated legal terrain at the national level, and more so at the global level. It crosses both national boundaries and the boundaries between the digital and the literal. It illustrates the need for engaged dialogue among the various stakeholders ranging from engaged literal and digital stakeholders to lawmakers and judicial systems, a dialogue that must precede any rush to legislation and regulations, both at the national and the global (international/multilateral?) level.
Digital Residency and the Rights and Obligations of Digital Citizenship
Article 15: (1) Everyone has the right to a nationality.
The advent of digital technologies has created an important new reality: the scope for a digital residence in the cyberspaces of the Internet ecosystem. Consider digital residency and how the issues raised relate back to the UDHR. For starters, digital technologies are a double-edged sword. [34] Practically every trace of one's presence in cyberspace is uniquely tagged to one's literal persona. One's virtual identity, and the multiple AI-assisted digital personas constructed by others, facilitate one's integration into new virtual and literal contexts in ways that one may not approve of or wish for. They provide unique identifiers to others well beyond our contemporary notions of privacy and personal security. [35] Nationality, as a citizen or resident of a state, is an important foundational concept of the UDHR. [36] It defines the legal relationship of a person to the state, giving the state jurisdiction over the person. In turn, the person enjoys the protection of rights and duties from the state. The protection of those rights and duties, and the honoring of them by both the citizen/resident and the state within the realm of one's digital residency in cyberspace, is an area calling for multistakeholder dialogue to explore the issues and multistakeholder engagement in policy development.
Article 15: (2) No one shall be arbitrarily deprived of his nationality nor denied the right to change his nationality.
The world is again in the middle of a great migration. From 1850 to 1950 one hundred million people migrated, mainly from Europe to colonial areas and areas of sparse population. [37] We are on the cusp of another great migration. Social unrest and climate change have resulted in 70 million forcibly displaced persons, many of those with little prospect of "going home" in any meaningful way. [38] Migration results from both push factors and pull factors. While most existing migrants have been "pushed" by political unrest, estimates suggest that hundreds of millions more will be "pushed" by climate change in the next 20-30 years. [39] Article 15 was crafted after the terrible events of WWII. Refugees fleeing persecution and economic hardship faced some hostile reception but found welcoming destinations in other parts of the world. Increasingly migrants are "displaced persons" stuck in an indeterminate existence in slums and refugee camps, with nowhere to go. They may retain literal nationality or have become effectively stateless. Deprived of the rights of their prior literal nationality, they have little hope of changing literal residency or nationality.
Is there any scope for ameliorating this situation through the extension of digital citizenship? There is much ongoing work around assigning digital identification documentation to refugees for the management of immediate services. Many have been displaced with no personal documentation and are effectively stateless persons. Within the refugee process, a digital identity can compensate for the lack of proof, but it cannot restore the ability to exercise the rights and obligations of one's literal identity, leaving open the question: what good is a digital identity in the absence of a literal identity?
Whether there is any scope for ameliorating these personal situations through the extension and application of digital citizenship remains an open question. The short answer is that nobody knows. It depends on how the world treats the creation of the rights and duties of digital citizenship in the absence of rights of literal residency. This is another area to be explored.
There is a bit more to explore regarding UDHR Articles 15: (1) and 15: (2), which will be handled in the next article in this CircleID series, along with an exploration of UDHR Articles 16 and 17.
What is clear thus far is that the UDHR can and should serve as the North Star/Southern Cross navigational aid, both for constructing an understanding of how we should approach the rights and duties of digital citizenship, in particular global digital citizenship, and for the multistakeholder dialogue we need around how to handle those rights and duties challenges that are unique to the digital cyberspaces of the Internet ecosystem.
We also reiterate our position that there is no one-size-fits-all template for developing our understanding of and approach to digital citizenship. There is a need for multistakeholder engagement, both to identify the best path forward and to get stakeholder buy-in to both the path taken and the mechanisms chosen.
The path forward cannot be completely regulatory. It will require attention to restoring society's social fabric, with possibly different repairs in different settings, and to rebuilding the underlying social contract to embrace human activities across both our digital and literal realities.
Beyond that, and beyond how these tasks are handled at the national level in individual nation-states, there will need to be some blend of international and multilateral action to move forward. This progress, while informed by historical perspective and expertise, will have to come from the multistakeholder engagement that has been enabled by the digital cyberspaces of the Internet ecosystem. Trying to circumvent, or short-circuit, that route will result in delay and the risk of failure.
In the next section, we explore further issues of digital presence and data ownership in the digital cyberspaces of the Internet ecosystem, and how decisions in those areas affect how we handle the rights and duties of digital citizenship, in the quest for a digital and literal future with promise.
[1] The authors contributed this article solely in their personal capacity, to promote discussion around the UDHR, digital rights and digital citizenship. The authors can be reached at and . The authors would also like to thank Sarah Deutsch for her valuable contributions to the article.
[3] This series of articles is presented a bit like preparing the foundation for a house; here the house is the "house of regulations and rights" in the digital age. An understanding of the desired digital rights, and of the pitfalls of policy and regulation, is required to build a sturdy and relevant platform of digital rights.
[4] A long list of examples could be given here, ranging from issues of personal privacy and security, to disruptive disintermediation in business, and social process, to the toxic effects of false news on elections, governance and trust.
[5] See: The Economic Consequences of the Peace, written by the British economist John Maynard Keynes and published in 1919. His call for multilateral policies was ignored after WWI. He was instrumental in the growth of multilateralism after WWII.
[6] Comments are welcomed. Send comments with "UDHR" in the subject line to klausstoll@thebrocasgroup.com . Comments will be used to update this digital rights discussion in subsequent articles.
[7] After World War II the drafters of the UDHR faced a historical situation of immense complexity. British rule of Palestine, confirmed by the League of Nations, took effect in 1923. To escape persecution in Europe, Jewish immigration to Palestine took place in waves, resulting in Palestinian and Arab rioting in 1920 and 1921. The British imposed immigration quotas for Jews. The US Immigration Act of 1924 severely restricted Jewish immigration to the United States. Persecution in Poland and Hungary left those Jewish communities with few migration options. By 1938 several hundred thousand Jews had migrated to Palestine. Between 1939 and 1945 Nazi atrocities caused the deaths of approximately 6 million Jews, and at the war's end illegal migration accelerated. The British turned to the UN for help, and the United Nations Special Committee on Palestine (UNSCOP) proposed "an independent Arab State" and "an independent Jewish State". The resolution was adopted by the UN in 1947 and followed by inaction. David Ben-Gurion proclaimed Israeli independence in 1948. It is within this context that the drafters of the UDHR drafted Articles 13-15.
[8] For Country, Nation and State: A country is commonly understood as a defined and recognized geographic territory inside which people live according to a legally binding set of rules set by its own governance processes. The term nation often refers to a country, but not always. A nation may exist within or across geographic boundaries. It may be defined as a community of people based on political, economic, geographic, ethnic, religious, and other factors. The important difference between a country and a nation is that a nation may not have sovereignty or governing power, nor clearly delineated geographic boundaries. A state, by contrast, is an entity with governing power and sovereignty over a geographic area.
[9] Tech giants like Amazon, Facebook and Google have behaved in part as though they are their own global digital nations, some (Amazon) even appropriating the names of territories, reflecting their intentions to operate at a global scale and, in the absence of global Internet ecosystem governance, act as digital nations answerable only to themselves.
[11] There is scope for some extraterritorial application here. In the area of child abuse, for example, countries can and do prosecute citizens for sex crimes against children whether they are perpetrated at home or abroad.
[12] For example, when the EU implemented the General Data Protection Regulation (GDPR) in 2018, Facebook moved millions of accounts out of Ireland to remove them from GDPR regulation.
[13] This is a bit like one's rights on the high seas. Some of those rights have been enshrined in multilateral "law of the sea" agreements, some rights are protected by one's national citizenship and the protective efforts of the relevant nation states, and for some rights there is no legal protection at all. For example, in the case of rescue efforts when pirates hijack ships at sea, it is often difficult to determine which rights apply and which do not.
[14] While some might wish for a one world government approach that respects and enforces global digital citizenship, it is unrealistic to believe that nation states would give up sovereignty for the creation of a global digital governance structure. The short run path forward is more likely to be international and multinational.
[16] The area of AI-enhanced facial recognition software is a case in point here. China is using it to monitor human behavior and keep a "social credit" score card on individuals. Multiple commercial entities are compiling scorecard ratings (credit, insurance, health, driving) using digital business practice and data gathering techniques that raise serious legal and human rights (privacy) questions.
[19] There are ongoing issues here. How much privacy should prevail around domain name ownership? Anonymous ownership can hide criminals, predatory behavior and such. Revealed ownership can expose vulnerable groups to retribution by enemies and repressive regimes. Even the current controversy around the sale of the non-profit .org registry by the Internet Society to a private capital fund has raised questions around the protections afforded to social activist .org domain name holders.
[20] For example, Canadian citizens and permanent residents engaging in prohibited child sexual exploitation in a foreign country can be prosecuted in Canada even when they have not been convicted in the foreign country.
[21] It cannot simply mean the right to disconnect when access is increasingly seen as integral to both human and digital rights. It would be like saying that one can escape the constraints of literal citizenship by ceasing to breathe.
[22] Estonia is offering a digital e-residency. See: https://en.wikipedia.org/wiki/E-Residency_of_Estonia
[26] In both cases this can spill over into being subject to physical violence or engaging in acts of physical violence. The literal and digital worlds are parts of one single larger reality.
[27] Covid-19 makes for an interesting and important new example. The speed and volume of misinformation that appeared about Covid-19 were overwhelming. How do we move from an Internet saturated with misinformation and false news to an emphasis on an "information commons" based on evidence, truth and integrity? At one level this has become the Internet's finest hour, using novel approaches to address pandemic circumstances at lightning speed. Digital processes and digital actors (firms, organizations, governments, and individuals) have emerged as crucial to how we combat disease. At another level, emerging practices are raising questions about policy, practices and behavior that will have to be addressed once society is no longer on a war footing fighting the Covid-19 outbreak.
[28] It can be perceived as a threat to the political authority of the state if individuals distribute material containing uncensored information which undermines the credibility of state-controlled news media. See: https://en.wikipedia.org/wiki/Political_crime
[29] Edward Joseph Snowden leaked highly classified information from the National Security Agency (NSA) in 2013, after seeing the Director of National Intelligence, James Clapper, lie under oath to Congress by denying that the NSA wittingly collects data on millions of Americans. On May 20, 2013, Snowden left the United States to seek physical asylum and remains abroad. Julian Paul Assange, the Australian who founded WikiLeaks, published a series of leaks provided by U.S. Army intelligence analyst Chelsea Manning. After a series of asylum moves and legal complications, Assange faces indictment in the United States and remains incarcerated in London's Belmarsh Prison. Chelsea Elizabeth Manning, the American activist, whistleblower and former US Army soldier, was court-martialed in 2013 for violations of the US Espionage Act and other offenses after disclosing military and diplomatic documents to WikiLeaks; she was sentenced to prison in 2013, and the sentence was commuted in 2017. In March 2020, a federal judge ordered Chelsea Manning's release from jail.
[31] Greenwald, Glenn; MacAskill, Ewen; Poitras, Laura (June 9, 2013). "Edward Snowden: the whistleblower behind the NSA surveillance revelations". The Guardian. London.
[32] Army General Martin Dempsey, Chairman of the Joint Chiefs of Staff, speaking to the House Armed Services Committee (March 2014).
[33] Whistleblower leaks can be used to illuminate truth, or to influence outcomes. During the 2016 US Presidential campaign, WikiLeaks released documents pertaining to Democratic Party candidate Hillary Clinton. The U.S. Intelligence Community and a Special Counsel investigation concluded that the Russian government carried out the hacking to interfere in the 2016 US Presidential election.
[35] Even the idea of a permanent digital identification is subject to much debate. Benefits are being weighed against a "nanny" or "surveillance" state watching and meddling in every aspect of one's personal affairs. This concern is amplified when it comes to applications like AI-enhanced facial recognition software linked to broadly installed camera networks. The current Chinese personal "social credit" score, based on mass digital and video surveillance, is an example of such practices.
[36] The UN sees one of its central roles as enforcing the right to nationality, since the right to nationality implies protection of the human rights of every individual to a minimal standard, set down in the UDHR. This is reflected in the large number of treaties and resolutions, and the work of UN agencies, on the topic. See: https://www.ohchr.org/EN/Issues/Pages/Nationality.aspx
[37] Sadly, this was frequently at the expense of native populations seen to have no rights at all, and in some cases viewed as less than human.
Early action now on possible performance issues will "flatten the curve" of customer problems in the coming weeks and months.
Here are three things technology and software vendors can do right now to get ahead of problems that may appear (if they have not already) with services such as development, implementation and support:
Check your contracts to see whether there are any "material assumptions" that have failed or will fail — perhaps because of some governmental action or unavailability of personnel.
Consider whether a change order would be appropriate to redefine the scope of services, timing for performance, or the fees to be charged.
See if any delay in your performance is excused on the basis of force majeure. If so, do you need to give notice to your customer that you are claiming force majeure?
Learn from IBM: Do what is required when there are failures of material assumptions.
In 2006, the State of Indiana signed a $1.3 billion contract with IBM to revamp the technology of the State's welfare system. The economy went south in 2008. In the complicated breach of contract litigation that followed, IBM argued, among other things, that the economic downturn resulted in the failure of one of the material assumptions of the agreement. IBM urged the court to consider that failure of assumption when deciding whether IBM had materially breached its contract to develop and deploy the system.
The Indiana Supreme Court rejected IBM's arguments. Why? Not because the economic downturn was not a failure of a material assumption. (It might have been.) Indeed, the contract specifically said that one of the parties' material assumptions was that the economy would not take a downturn. But IBM did not do what the contract required in light of the downturn: it did not submit a change order request in response to the failure of the assumption, as the contract required.
Change orders anyway?
Even if your contract does not contain material assumptions, it may contain a procedure for procuring change orders. Parties include change order provisions so that they have an organized pathway for making changes to the scope, timing or pricing when circumstances — whether dramatic or trivial — change while the contract is being performed. Vendors should consider whether a simple change to the parties' obligations can be made now to reduce bigger problems later. It is better for a ship to correct its course early in the journey rather than after many weary days at sea.
And from a practical, customer-focused perspective, the discussions around possible change orders give a vendor the opportunity to communicate with its customers. This gives the vendor the chance to reassure the customer that services are safe in the long run, and it can build trust and goodwill that will be key to the further development and collaboration that will happen in the technology space once this COVID-19 episode has come to a close.
Force majeure notice — it is critically important
In the litigation against the state of Indiana, IBM also claimed that severe flooding in the state in 2008 was a force majeure event that excused IBM's performance. Again, as with the argument for the failure of material assumption, IBM did not do what it was required to do under the terms of the contract to avail itself of this excuse in performance.
The court found that force majeure did not apply because IBM did not give appropriate notice as required under the agreement. This highlights a critical takeaway — if a vendor sees an upcoming need to claim that it cannot perform due to some circumstance arising from causes outside its control, it is better to place the customer on notice of that fact sooner rather than later.
So, here are the key questions to ask right now:
Has a material assumption failed? If so, what must I do?
Would a request for change order be appropriate?
What do I need to do before claiming force majeure?
Being proactive now, in the early stages of the COVID-19 crisis, will — just as in the epidemiological context — flatten the curve of problems later.
Despite a recent Ninth Circuit decision denying immunity to a malware detection provider for targeting a competitor's software, a court has held that Section 230 protected Malwarebytes from liability for designating a software driver program as a potentially unwanted program.
Plaintiff provided software that works in real time in the background of the operating system to optimize processing and to locate and install missing and outdated software drivers. Defendant provided malware detection software designed to scan consumers' computers and report potentially unwanted programs. After defendant's software categorized plaintiff's software as a potentially unwanted program, plaintiff sued, asserting a number of business torts, including business disparagement, tortious interference, and common law unfair competition.
Defendant moved to dismiss under 47 U.S.C. 230(c)(2)(B), which provides that no provider of an interactive computer service shall be held liable on account of any action taken to enable or make available to others the technical means to restrict access to material that the provider deems to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.
The court granted the motion to dismiss, holding that Malwarebytes was immune from suit under Section 230. It differentiated the case from the Ninth Circuit's recent decision in Enigma Software Group USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040 (9th Cir. 2019), in which the court held that Section 230 immunity did not protect Malwarebytes for designating a competitor's anti-malware software as "otherwise objectionable." In this case, the court found that plaintiff's software did not make it a competitor to defendant. Since the parties were not direct competitors, the limitations on Section 230's protection did not apply.
The case will be met with a bit of a sigh of relief by those who, along with Professor Goldman, expressed concern that the Enigma case would make it more difficult for anti-malware providers to offer their services. Though Enigma did limit Section 230 protection for these vendors, this decision shows that Section 230 immunity in this space is not dead.
Asurvio LP v. Malwarebytes, Inc., 2020 WL 1478345 (N.D. Cal. March 26, 2020)
Trademark owners in the U.S. have a choice in pursuing alleged cybersquatting: either the Uniform Domain Name Dispute Resolution Policy (UDRP) or the Anticybersquatting Consumer Protection Act (ACPA). Of the two, the UDRP is far and away the forum of choice for a very good reason: it is speedy, efficient, and inexpensive. Complaint to award can be concluded in less than 45 days. It is, though, a hermetic procedure in that it applies its own jurisprudence, which generally matches U.S. federal law established under the ACPA, but not always, and it may have something to learn. The tension between the two is expressed in the outcomes of cases challenging UDRP awards. When panelists and judges draw different inferences from the facts and reach different conclusions about liability, it is natural to question which law should be applied; and when the court annuls an award, what the takeaway lessons should be for panelists.
The challenge starts with the jurisdictional scope of the UDRP and the sometimes absent, incomplete, or unprofessional submissions of evidence from respondents that, in some factual circumstances, advantage complainants. Error of reasoning doesn't happen often, but that it happens at all can be disturbing. In its Second Staff Report Paragraph 4.1(c), the Internet Corporation for Assigned Names and Numbers (ICANN) stated that the UDRP was "intended only for the relatively narrow class of cases of abusive registrations." The overwhelming number of cases (in the low 90% range) fall within the "narrow class." These are generally cases of obvious infringement in which facts rather than supposition or conjecture directly implicate respondents' unlawful intentions. However, as facts supporting cybersquatting become less obvious, and there is more dependence on inference, inclusion in the narrow class becomes increasingly attenuated, and sometimes draws in cases that are not in the narrow class.
This is illustrated by a few decisions in which panelists have held respondents liable for cybersquatting that courts have corrected by annulling UDRP awards. Some cases have been decided in trademark owners' favor that should properly have been dismissed as outside the scope of the UDRP. There is a temptation (not always resisted by some panelists, and most certainly unconscious) to give complainants the benefit of the doubt on the issue of bad faith registration. Admittedly, this is a minuscule number compared with the whole, but for losing respondents, there can be a punitive cost for challenging UDRP awards in federal court, even though they prevail. I made an earlier stab at understanding this phenomenon in False Expectations: Attorney's Fees and Statutory Damages in ACPA Actions. If there is any doubt as to the quality of the evidence, a claim should be rejected as outside the "narrow class."
There is a sense in rereading some decisions in challenged UDRP awards that they were made ad hoc, with panelists reaching their conclusions without fully parsing or appreciating the facts. The latest example, Lotto Sports, is discussed further below. In the earlier Airfx UDRP case, although the domain name predated the trademark, the Panel nevertheless agreed with Complainant that Respondent "did such with the intent to profit by renting such domain names to the legitimate owners of the trademark." As far as one can tell from the Panel's summary of the facts, there was no evidence Respondent could have known about Complainant's future trademark registration, and Complainant offered no evidence of having any common law right that could have made sense of its case.
Admittedly panelists work under great pressure to get their decisions filed (14 days from the date of full submission), but if a case is a square peg, it shouldn't be forced into a round hole. One of the virtues of these "reversals" (and why panelists and parties should pay attention to them) is they teach us to look at facts more carefully and be more skeptical of parties' assertions. The more a decision-maker relies on inference, the greater the likelihood of stepping into error.
If we had to take a count, the majority of challenges to UDRP awards up to now have resulted in settlements in favor of domain name holders. See earlier essay: Prudential Settlements for Alleged Cybersquatting/Reverse Domain Name Hijacking Under the ACPA. Except for a handful of landmark ACPA cases, prominently AIRFX.com v. AirFX LLC, No. CV 11-01064-PHX-FJM, 2012 WL 3638721, at *6 (D. Ariz. June 6, 2013), Black v. Irving Materials, Inc., 17-CV-06734-LHK (N.D. Cal 2019/2020), and now Dent v. Lotto Sport Italia SpA, CV-17-00651-PHX-DMF (D. Arizona Mar. 10, 2020), it has been rare for actions to go the distance to trial and judgment.
What is emphasized in these cases, and in the reasoning that leads Panel and Court to different conclusions seemingly from the same facts, is that the court record has been more thoroughly digested (more thoroughly than a Panel has time to manage). It highlights in a dramatic way that in the competition for domain names, trademark rights are inversely impacted by complainants' choices of language: the more common their terms, or the more those terms (even if not common) are used by other commercial players, the less protection mark owners can expect to enjoy. That a domain name may be identical to a trademark is not evidence of unlawful registration, although it could be with the right ingredients of facts.
In recent months several other losing domain name holders have launched ACPA challenges: Ganeden Biotech, Inc. and Kerry Luxembourg S.à.r.l v. Rob Monster, D2019-3012 (WIPO February 13, 2020) (<BC30.com>); Lakes Gas Co. v. Domain Administrator, DomainMarket.com, D2019-0830 (WIPO June 21, 2019) (<lakesgas.com>), earlier in 2019; and possibly others. In these, however, the sparseness of the records underlines an important point in both UDRP and ACPA actions: inferences from skimpy records are generally drawn against the party having the burden of proof or production. In Ganeden, the Respondent limited itself to an email that stated that "[t]he Disputed Domain Name is generic, and the Complaint should be denied. The Panel should find Reverse Domain Name Hijacking (RDNH)." This is not, I think, a strategically wise submission! In the federal action, the plaintiff domain name holder should possibly rethink his submission strategy.
If there is "error" in a Panel's reasoning, it must be in the ultimate finding of bad faith registration. In the Black case, for instance, discussed in more detail in False Expectations previously noted, the domain name holder failed to respond to the UDRP, thus leaving the Pane with a one-sided record. With a full record in the ACPA action, first, an advisory jury held the plaintiff's registration was lawful, then in a lengthy written decision, the Court explained the legal basis for vacating the UDRP award. Unfortunately, the Court denied plaintiff's (losing UDRP domain name holder) motion for attorney's fees, reinforcing the point about false expectations in reverse domain name hijacking cases. Plaintiff vindicated his rights but at a heavy financial cost (absorbed most likely by his attorney in unpaid time charges).
Two new ACPA court decisions were issued earlier in the month. Both should give panelists and parties something to think about. The Pace case noted earlier was probably received with dismay by the challenging domain name holder, while the result in the other, Dent v. Lotto Sports, supra, was received with joy. Of the two, Dent is the more important, but Pace illustrates a point that plays a significant role in the outcome of UDRP cases, namely the weight of such evidence as has been submitted, and credibility. Defendant (the trademark owner that prevailed in the UDRP) did not appear in the ACPA action, and Pace moved for default judgment, but the court was having none of it: "The context of the case suggests serious questions as to the veracity of [Pace's] legal conclusions, under both the ICANN UDRP and the ACPA." I won't go into the case further at this point; rather, I will reserve comment until the next motion and ruling.
In Dent, the court had a full record and briefing on both the facts and the law. It concluded that plaintiff's registrations of <lottostore.com> and <lottoworks.com> were lawful. It arrived at this determination by focusing more carefully on two factors that deserve greater attention in UDRP cases, since panelists generally view them through a narrower lens: 1) the domain name creation date; and 2) website content. Ordinarily, panelists commence their analysis of respondent's intention to cybersquat as of the domain name registration date. This works well with cases in the "narrow class," but less well for cases in which the factual circumstances create uncertainty of intention. Remember, two of the bad faith circumstances include the phrase "primarily for the purpose." It can almost be stated as a law: as the uncertainty of intention increases, so does the need to rely on inferences, and this may (without conscious intention) lead to giving complainants the benefit of the doubt.
The Court's decision in Airfx demonstrated this with particular clarity. The UDRP Panel viewed the facts through Complainant's eyes; the Court viewed the facts as an outsider piecing together the circumstances, and granted judgment to the domain name holder. The Dent facts are different in that the domain names had been acquired from earlier registrants. The creation date predated the trademark, but plaintiff's registration date postdated the mark. This brought into the fact pattern the good or bad faith of earlier holders, which (as I noted) is not ordinarily considered in a UDRP analysis. Domain names predating corresponding but later-acquired trademarks are perfectly lawful. The question is whether successors registering domain names after marks are used in commerce have the traditional rights of property ownership. Domain names have been held by a number of jurisdictions, including the U.S., to be intangible property.
This issue arose in GOPETS Ltd. v. Hise, Digital Overture, Inc., 657 F.3d 1024 (9th Cir. 2011), cited in Dent. Ordinarily, the creation date is not important in UDRP adjudication, but the Dent court showed how it can be, and why in this particular case the Panel missed its significance:
Given that the Court has found that Plaintiff purchased the 1998 and 2011 non-party registrations of the disputed domain names, given that there is no evidence of unlawful use or bad faith regarding such registrations, and given that the Court has found that Plaintiff did not use the disputed domain names, the Court need not address bad faith and safe harbor provisions of the ACPA.
The Ninth Circuit had concluded that "in light of traditional property law," there was "no basis in ACPA to conclude that a right that belongs to an initial registrant of a currently registered domain name is lost when that name is transferred to another owner."
Translated to the UDRP, this means the claim was outside the "narrow class," and the complaint should have been dismissed. Whether Dent will suffer the same result on its motion for attorney's fees as in Black remains to be seen, though the facts are slightly different, which makes the prognosis more comforting in Dent.
The second issue in Dent of interest to parties and panelists concerns the use or non-use of the domain name, or its resolution to a registrar-created page as opposed to a respondent-created page. Ordinarily, in UDRP proceedings, populated pages are read against respondents. Respondents should take pains to properly curate their websites by avoiding infringing links. However, in Dent:
Defendant [prevailing party in the UDRP] cites to no case authority discussing or holding that a domain name owner's utilization of a GoDaddy or a similar noncash parking page constitutes "use" of the domain name in the context of a claim under the ACPA. Further, the evidence indicates that: (1) Plaintiff has not developed a public website using the domains; (2) Plaintiff has not advertised or sold any goods or services using the domains; and (3) under his agreements with GoDaddy, Plaintiff has no authority to modify the content on the parked pages and may only inquire of customer support what further options "might be available."
This suggests panelists should dig deeper before holding respondents liable for content created under "noncash parking page" agreements. This is not to say that direct links to complainant or competitors should not be a factor, but the intention to violate the UDRP must be established positively, not by conjecture or for that reason alone.
Whether panelists should look to court decisions for guidance on future UDRP cases is an interesting question, but the Airfx, Black, and Dent judgments "reversing" UDRP awards demonstrate through their analysis and reasoning what the finder of fact should be looking for. In these cases, the UDRP Panels should have dug deeper. These correctives are a positive development. In the same way that appellate rulings establish the law for district court judges, so judges in their reasoning on cybersquatting claims offer a reading of the law for panelists that deserves their attention. The more clearly the boundary of the "narrow class" is fixed, the stronger the UDRP.
Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP
This week, the ICANN Board will vote on whether to approve the transfer of control of the .ORG domain to a private equity firm called Ethos Capital. This is a weighty decision for ICANN, since in order to approve the sale, it will have to convince itself that this is the right choice in the face of tremendous and widespread opposition, including from those who will be most affected by the sale. Over 800 nonprofit organizations have publicly objected, along with over 50,000 individual Internet users, representatives of several governments, two United Nations Special Rapporteurs on Freedom of Expression, and respected members of the Internet governance community. Recently, Xavier Becerra, the Attorney General of California — where ICANN is organized as a nonprofit — wrote to ICANN requesting a "no" vote on the change of control.
How should ICANN view Attorney General Becerra's letter? In a post to this site, Jonathan Zuck reminded us that "getting on the bad side of a state attorney general is never a good idea." All the more so when the attorney general is the regulating entity over you, as California's is over ICANN. At the same time, he wrote, ICANN allowing itself to be influenced by the Attorney General "undermines ICANN's independence and sets a bad precedent for the future."
I agree with Jonathan that ongoing government interventions would not be good for ICANN. After all, plenty of governments around the world seek more power to censor the Internet, and co-opting ICANN's authority over the domain name system is a path to growing that power. ICANN's independence from government control is hard-won. That's exactly why ICANN's Board should not take steps that cause regulators to be concerned that it's not taking its public interest charter seriously. We think that ICANN should vote to reject the transfer of the .ORG registry, based not on Attorney General Becerra's request but on the reason that the Attorney General felt compelled to write at all: the tremendous weight of the evidence and opinion ICANN has collected over the past months that this transfer is a bad idea, for the public interest and for the future health of the nonprofits that rely on a strong .ORG. Acting to protect the nonprofit community by rejecting the transfer would demonstrate to the Attorney General that ICANN is fulfilling its chartered purposes. Doing the right thing on the .ORG issue reduces ICANN's vulnerability to government interventions.
There's an undercurrent of opinion within ICANN that the way to maintain independence is to refuse government requests for action under any circumstances. In many cases, especially where the regulator is wrong or acting politically, that's the right approach. But it's the wrong strategy here. According to Gene Takagi, an attorney with NEO Law Group and Contributing Publisher of the Nonprofit Law Blog, "the California Attorney General has regulatory power to ensure that charitable assets are used for the purposes laid out in an organization's charter, and not purely for private benefit." And the .ORG registry agreement, under which ICANN delegates the right to run that registry, is quite a valuable asset.
In extreme circumstances, the Attorney General can involuntarily dissolve a nonprofit corporation that fails to carry out its purposes. But without going to that extreme, the state's regulators could audit ICANN's use of valuable resources, including its registry contracts — not a great scenario for any stakeholders.
No matter the perception, ICANN is not a law unto itself. It is neither a government nor an international organization established by treaty. It is a California nonprofit corporation. ICANN cannot avoid California law enforcement officials acting to make sure that ICANN is using its valuable registry agreements to further its mission, except by proactively and clearly staying true to that mission.
California takes remedial action against charitable trusts and public benefit corporations every year — corporations with far less impact than ICANN. That no such action has been directed to ICANN in the past 22 years is a testament to ICANN's good governance in pursuit of its mission so far. For example, in 2002, ICANN fulfilled a major purpose for which it was founded, by implementing the transfer of the .ORG registry from Verisign to the Internet Society, through the newly created Public Interest Registry. That transfer didn't involve ICANN passively acquiescing to a secretive business deal. It was, instead, the culmination of an open and public bidding process based on who could best protect the interests of nonprofit Internet users, not merely on who showed up with a bunch of cash. ICANN recently re-affirmed the importance of the 2002 criteria.
Following the strong objections of nonprofit Internet users and a marked lack of disclosure by PIR and Ethos, a rubber-stamp approval of the .ORG transfer, or an approval conditioned on token oversight that will be meaningless in practice, would only reinforce the conclusion that ICANN now considers itself unaccountable. It would also demonstrate that ICANN doesn't take its public interest commitments seriously. This is especially true given ICANN's refusal to disclose information to the Empowered Community about the .ORG transfer, shutting down the first invocation to date of that system of accountability. Approving the sale would also affirm the rising chorus of Internet users who fear that ICANN serves the interests of the largest registries and registrars at the expense of Internet users.
The sale of PIR to Ethos Capital risks selling out the interests of .ORG registrants, including millions of the world's most important NGOs. ICANN must stand up for the well-being of NGOs in this time of global crisis by denying the sale of their domain registry to private equity investors. Otherwise, the decision may be taken out of ICANN's hands. That would truly set a precedent that will harm NGOs and all Internet users in the long run.
Written by Mitch Stoltz, Senior Staff Attorney at the Electronic Frontier Foundation
From Ambiguity in Privacy Policies and the Impact of Regulation paper: "To see how a sentence may reflect these categories, the phrase "we generally may share personal information we collect on the Site with certain service providers, some of whom may use the information for their own purposes as necessary" contains a condition, generalization, modal verbs and numeric quantifiers. These vague terms are annotated in the sentence as shown."
As anyone reading this blog assuredly knows, the world is in the grip of a deadly pandemic. One way to contain it is contact tracing: finding those who have been near infected people and getting them to self-quarantine. Some experts think that because of how rapidly newly infected individuals themselves become contagious, we need some sort of automated scheme. Traditional contact tracing is labor-intensive and time-consuming, and time is something we don't have. The only solution, they say, is to automate it, probably by using the cell phones we all carry.
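To make "automate it" concrete, here is a minimal, hypothetical sketch (in Python) of one decentralized design that has been discussed in this space: phones broadcast short-lived random tokens over Bluetooth, remember the tokens they hear, and later check them locally against tokens published by users who test positive. The class and method names are illustrative assumptions, not any specific deployed protocol.

```python
# Minimal sketch of a decentralized proximity-tracing scheme (illustrative only;
# not any specific deployed protocol).
import secrets
from datetime import date, timedelta

class Phone:
    def __init__(self):
        self.my_tokens = {}        # date -> tokens this phone has broadcast
        self.heard_tokens = set()  # tokens observed from nearby phones, kept locally

    def new_broadcast_token(self, day: date) -> str:
        token = secrets.token_hex(16)              # random, rotating identifier
        self.my_tokens.setdefault(day, []).append(token)
        return token

    def observe(self, token: str):
        self.heard_tokens.add(token)               # stored on the device, never uploaded

    def tokens_if_diagnosed(self, days: int = 14):
        cutoff = date.today() - timedelta(days=days)
        return [t for d, toks in self.my_tokens.items() if d >= cutoff for t in toks]

    def check_exposure(self, published_tokens: set) -> bool:
        return bool(self.heard_tokens & published_tokens)

alice, bob = Phone(), Phone()
bob.observe(alice.new_broadcast_token(date.today()))  # the two phones were near each other
published = set(alice.tokens_if_diagnosed())          # Alice tests positive and uploads her tokens
print(bob.check_exposure(published))                  # True -> Bob's phone notifies him locally
```

The privacy-relevant design choice in this style of scheme is that matching happens on the user's own device; no central server learns who was near whom.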
Naturally, privacy advocates (and I'm one) are concerned. Others, though, point out that we've been sharing our location with advertisers; why would we not do it to save lives? Part of the answer, I think, is that people know they've been misled, so they're more suspicious now.
As Joel Reidenberg and his colleagues have pointed out, privacy policies are ambiguous, perhaps deliberately so. One policy they analyzed said:
"Depending on how you choose to interact with the Barnes & Noble enterprise, we may collect personal information from you…"
"We may collect personal information and other information about you from business partners, contractors, and other third parties."
"We collect your personal information in an effort to provide you with a superior customer experience and, as necessary, to administer our business."
"May"? Do you collect it or not? "As necessary"? "To administer"? What do those mean?
The same lack of clarity is true of location privacy policies. The New York Times showed that some apps that legitimately need location data are actually selling it, without making that clear:
The Weather Channel app, owned by an IBM subsidiary, told users that sharing their locations would let them get personalized local weather reports. IBM said the subsidiary, the Weather Company, discussed other uses in its privacy policy and in a separate "privacy settings" section of the app. Information on advertising was included there, but a part of the app called "location settings" made no mention of it.
Society is paying the price now. The lack of trust built up by 25 years of opaque web privacy policies is coming home to roost. People are suspicious of what else will be done with their data, however important the initial collection is.
Can this be salvaged? I don't know; trust, once forfeited, is awfully hard to regain. At a minimum, there need to be strong statutory guarantees:
The information collected will only be used for contact tracing;
It will not be available to anyone else, including law enforcement, for any reason whatsoever;
There are both criminal and civil penalties for unauthorized collection or use of such data, e.g., by a store;
There is a private right of action as well as city, state, and Federal enforcement;
Class-action suits to enforce this are permitted, regardless of terms and conditions requiring arbitration.
And this needs to be as iron-clad as a battalion of lawyers can make it.
I don't know if even this will suffice — as I said, it's hard to regain trust. But passing a strong Federal privacy law might make things easier when the next pandemic hits — and from what I've read, that's only a matter of time.
(There's a lot more to be said on this topic, e.g., should a tracking app be voluntary or mandatory? The privacy advocate in me says it must be voluntary; the little knowledge I have of epidemiology makes me think that very high uptake is necessary to gain the benefits.)
Written by Steven Bellovin, Professor of Computer Science at Columbia University
Articles 15 (revisited), 16-17: Freedom of Assembly, Economic and Social Rights. Co-authored by Klaus Stoll, Prof. Sam Lanfranco, and Sarah Deutsch1
Internet Governance, like all governance, needs to be founded in guiding principles from which all policy making is derived. There are no more fundamental principles to guide our policy making than the Universal Declaration of Human Rights (UDHR). This article is Part 5 of the series of articles2 (published in installments); in it we revisit Article 15 to look at empowered Digital Citizenship and Internet Governance, before moving on to the rights to a family and to property3.
Article 15: (1) Everyone has the right to a nationality.
Article 15: (2) No one shall be arbitrarily deprived of his nationality nor denied the right to change his nationality.
In the previous article of the series, we looked at the Rights and Obligations of digital citizenship. As promised in Part 4, we will now further explore empowered digital citizenship and Internet Governance.
Empowered Digital Citizenship: Internet Governance and a Better Future
This series examines what the UDHR tells us about what could be, and maybe what should be, our rights and obligations in the digital spaces of the Internet ecosystem. Across communities, there is not likely to be one unique path or one unique arrangement of mechanisms. Here, we only propose an aspirational starting point, beginning with engaged digital stakeholders as individuals and as members of communities.
This part of the series is being written at a time when the world is gripped in a global pandemic that is unleashing virus-driven illness and death on a scale not seen in over a century.4 This pandemic is both a health and an economic crisis, and a crisis where we are relying as never before on digital technologies to carry on our personal, business and public lives. Internet access has become a literal matter of life and death. The realities of digital space are affecting, and affected by, the pandemic as much as our literal and biological realities are. There is a growing understanding that the virtual and the literal are integral parts of both our individual and our collective realities.
This crisis is pitting public health concerns against economic concerns, reflected in debate on how to balance health strategies (testing, isolation, social distance) with the health of the economy (jobs, income, output). It is also raising issues around regulations and policy directives as complements to, or in competition with, social-behavioral norms. These issues always reside just below the surface in Internet citizenship and Internet policy discussions. How do we decide the tradeoff, under uncertainty during this pandemic, when we think about mandating society's protection of common good, balanced against the rights and duties of persons? Let's deal with that for a moment.
"The common good is about how we live together in community. It's about the ethical ideals we strive for together, the benefits and burdens we share, the sacrifices we make for one another. It's about the lessons we learn from one another about how to live a good and decent life."
Harvard political philosopher Michael Sandel5
"We" appears five times in Sandel's definition of the common good. In contrast, the UDHR is highly focused on the individual, on the "Me".6
The definition, defense and exercise of one's rights and duties as a person never occur apart from one's membership within the larger community. While the rights focus on the person, the duties are focused on the community. The relationship between the person and the community has never been more relevant to social policy and social behavior than in today's digital age.
The traditional notion of community is heavily constrained by time and space and shaped by historical experience. The pervasive and global scope of the Internet means that in actuality (literally), one's presence and residence are in multiple communities that operate across time and space. People's simultaneous online residences range from long-standing spaces to extemporaneous "pop-up" communities, such as the multitude of such "gatherings" in the current COVID-19 pandemic.
The global and instantaneous venue of the Internet presents an aspirational starting point to engage digital stakeholders in shaping policy and behavior for both individual rights and for obligations on how communities live together.
Individual rights, within Internet governance, do not come from simply imposing a predetermined governance model onto the Internet ecosystem. Nor can a social fabric of acceptable behavior be simply willed into being. These are not simple decisions, like deciding which side of the road to drive on. Establishing the rights and duties of digital citizenship will likely be a two-stage process.
The first stage will involve identifying and subscribing to a set of basic principles, much like those in the UDHR. The second stage will be the process of legislative and behavioral changes over time, changes that flesh out the rights and duties of one's digital citizenship, both at the national and the global levels. Just as the rights and duties of literal national citizenship have developed and changed over time, digital national citizenship will go through the same process.
It is likely that global digital citizenship will develop in two directions, upward from the refinement of national digital citizenship, and downward from principles and ideas starting with the notion of a global digital citizenship that exists in addition to and partially apart from one's national digital citizenship.
The processes used to define digital citizenship cannot be independent of the governance processes used to govern a country and define literal citizenship. This suggests that the processes used to define digital citizenship are both constrained by existing governance processes and have the potential to address some of the contemporary failings of governance processes in democratic countries.
The current Internet ecosystem is a toxic stew of good information and analysis, poisoned by abundant doses of bad information, false news, and lies. It is a bit like the state of medicine in the late 19th century, when medications ranged from folk remedies to benign untested remedies, snake oil, and the outright poisonous.
At the national level, one's rights and duties of digital citizenship will come to have the same legal status as one's literal citizenship. The reverse may not, and need not, be the case. Estonia is a country with a strong digital residency regime. A foreigner may acquire national digital e-Residency without being a literal resident of the country. A digital resident may have no literal residency rights.7
The challenge before us now is:
What should be the rights and obligations of a digital resident's digital citizenship?
There is no "off the shelf" prepackaged answer to that question. The rights and obligations of both state-level and global digital citizenship are, and must be, a work in progress, developing top-down in the form of rules and regulations, and developing bottom us as behavioral norms are woven into the social fabric and implicit social contract. The questions to be addressed here include:
How are the rights and duties of global digital citizenship related to those of national (state-level) digital citizenship?
What levels of stakeholder engagement are called for in the policy development process around digital citizenship rights and obligations?
Some will argue that there is a need for something akin to a global cyber state overseeing the development, administration and enforcement of the rights and duties of global digital citizenship. Others will object, arguing that such an approach is impractical and a global cyber state infringes on the sovereignty of the nation-state.8
There is however a middle ground — one with a long history in terms of dealing with issues at the global level. That middle ground between no global governance and an unpalatable global governance is the use of multilateral agreements.9 One possible path forward involves exploring policy processes (mechanisms) and efforts that involve engaged multi-stakeholderism at the bottom, and state-led multilateralism at the top.10
The physical nation-states play an ambiguous role when it comes to protecting digital citizenship rights. They are developing policies related to national digital citizenship, while trying to extend that control into global cyberspace. Such strategies are bound to experience extreme difficulties in the borderless cyberspace of the Internet ecosystem. These issues become even more challenging when such national policies inevitably clash with each other across jurisdictional boundaries.
So long as states fail to recognize the global borderless nature of cyberspace, their efforts to protect their citizens in global cyberspace will always be inadequate. It will take states entering international treaties that regulate the digital relationships between literal states in order to ensure that the rights of their citizens are respected in borderless cyberspace.
At an individual level, empowered digital citizenship should bring the right to access global cyberspace, the right of protection by the State, and the obligation of the State to engage in multilateral efforts to protect its citizens' global digital rights. On the other hand, state-level interference with cyberspace, such as network takedowns, constitutes an abridgment of principle-based rights of digital citizenship in both national and global cyberspace.11
Article 15: (2) states that: "No one shall be arbitrarily deprived of his nationality nor denied the right to change his nationality." This confronts us with an interesting conundrum in global cyberspace. While one's national and global digital residency can be protected or abridged by the actions of one's nation-state, and by multilateral agreements, what might it mean to change one's digital nationality? As well, given the fluid definition of nationality, one may well possess multiple digital nationalities. If states arbitrarily abridge digital citizenship rights in cyberspace, what are the citizen's options? One can, of course, exercise engaged participation to try to enshrine and protect digital rights. One can resist when confronted with tactics contrary to the universal principles enshrined in the UDHR, or enshrined in subsequent global digital citizenship covenants.
Does one have a right, or a possibility, to secede? The answer is both a yes and a no. One can secede from a state's jurisdiction by emigration, but one cannot secede from the global cyberspaces of the Internet ecosystem any more than one can secede from earth's gravity. The mere fact of existing now makes one a resident of global cyberspace. One is likely to have residency even prior to birth.
What this means is that one's presence is preordained and that one has a duty and an obligation to willfully become an engaged digital citizen in the cyberspaces of the Internet ecosystem from the moment one is capable of measured and deliberate action. This does not mean a childhood engagement in the governance processes, but it does mean a progressive learning and understanding of integrity-based engagement in policy and behavioral norms that make one a responsible, engaged digital citizen of the national and global internet ecosystems.12
Disenchantment, Digital Governance and Engaged Digital Citizenship
One cannot opt out of cyberspace any more than one can opt out of gravity. One can, however, be disenchanted with one's digital residency, be that the digital residency where one possesses literal citizenship or one of several virtual residencies in other communities. One can be alienated from the governance and socio-economic processes that surround a particular digital residency. How individuals are treated within their digital residency has consequences for their literal lives. It can promote engagement as citizen stakeholders or disengagement as alienated digital citizens.
The three sources of frustration, disappointment and concern are:
underdeveloped governance mechanisms and social norms (embedded in society's social contract and social fabric) that fail to facilitate a safe and secure existence in one's literal and virtual life.
lack of confidence in the integrity of digital business and governance practices as they impact the personal (e.g., privacy and security). Confidence here goes beyond privacy and security to trust in the Internet, not only from an infrastructure perspective but as a trusted source of information, e-commerce, and more. The recent fraud associated with the current COVID-19 crisis is a good example of how trust in the Internet can be eroded if abuse is not addressed.
the absence of adequate stakeholder dialogue and engagement in policy planning, in implementation and in the capture and use of lessons learned.
The lack of an appropriate governance mechanism, and of an appropriate social contract woven into the social fabric can be explained, if not excused, by the relative newness of the cyberspaces of the Internet ecosystem.
The resulting lack of confidence by stakeholders and the questionable integrity of many digital business and governance practices are clearly issues to be addressed. Progress on both depends on improving stakeholder engagement in planning, implementation, and advances based on lessons learned.
The challenge at hand is clear. It is to move from disenchanted and indifferent residency to engaged citizenship in cyberspace, a move to an engaged residency that sees the rights and obligations of digital citizenship codified at the appropriate levels of governance and acceptable behavior woven into the social fabric.
Here the focus is on rights and obligations at the global level, in those spaces and regions of the global Internet ecosystem that are beyond the reach of individual nation-states, those spaces where residency may be within multiple communities. Progress here will likely require cooperation via multilateral, intergovernmental, and international mechanisms.
The contemporary response to issues of personal privacy and security, and to entanglement with questionable digital business and governance processes is the common refrain:
"Somebody should do something about that."
The refrain presents both problems and suggested approaches.
The "that" referred to in the refrain is seldom clear enough to be the target of focused stakeholder engagement in the policy process. A wider discourse is needed to specify and assign priority to the "that" list for policy development. Is "that" personal data privacy, false news, faulty analysis, or what? How is the solution to be a blend of governance actions and a rebuild of the social fabric and underlying social contract to accommodate new behaviors in the new realities of digital residency in the Internet ecosystem?
The "somebody" is equally problematic; who is that, who should do what? Digital residency in the global Internet ecosystem does not fall under the jurisdiction of existing regimes of sovereign governance. Whatever process is used to enact policy, it must pass through some governance mechanism. That will require a blend of multilateral, intergovernmental, and international venues. This process will have to strike a balance between what needs to be codified and what should be nurtured in a digitally enhanced social fabric and social contract.
The "something" is the most problematic element in the refrain. The governance mechanism must operate through a venue of sovereign participants. The something, as policies, regulations, or whatever, must help define and respect the rights and duties of digital residency and digital citizenship, and contribute to the rebuild of the social fabric and underlying social contract.13 Because these are hard issues, the outcome often reverts to doing nothing. Doing nothing in the face of clear harm is not a sustainable approach and only further erodes consumer trust. Again — COVID19 presents a good case study to show how dependent we all are at this time in history on the Internet to stay connected, to learn, to have access to information, entertainment, e-commerce...etc.
The notion of nationality contained in the UDHR, as ambiguous as it may be, presumes that nationality needs to be respected under the relevant sovereign law. Extending the principles of the UDHR protections to the global digital residents of the Internet is the central task of Internet Governance activity, an activity that needs to be stakeholder-grounded at the bottom and built from a blend of legislated agreements and social fabric/contract efforts at the top.14
Presence and Association in the Digital Age
While the virtual reality of cyberspace and the literal reality of physical space blend into the seamless larger lived reality, there are points in the UDHR that remind us of major differences between the virtual and the literal.
Article 16, with its focus on marriage and the family, is a good example of this.
Article 16: (1) Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family. They are entitled to equal rights as to marriage, during marriage and at its dissolution.
Article 16: (2) Marriage shall be entered into only with the free and full consent of the intending spouses.
Article 16: (3) The family is the natural and fundamental group unit of society and is entitled to protection by society and the State.
What are the analogs to family and marriage in the Internet ecosystem? Also, the disruptions of this digital age remind us that the social fabric is a patchwork quilt, occasionally in need of alterations and repair.
Certain parts of Article 16 link to the core principles of the UDHR and are timeless, while other parts reflect the social norms of the time. At its core is the equitable treatment of the genders regarding the rights of legal union and protection by society and the State.15 One possible relevance lies in the possibility of digital marriage, performed online. This emulates a traditional literal marriage, but if an authorized officiant is not present, does the marriage have legal standing?16 There are lessons here from the current COVID-19 pandemic. COVID-19 illness can come on suddenly and with life-threatening consequences. Many risk imminent death without a will, and quarantine prohibits the witnessing of signatures. Governments quickly adjusted to accept remote witnessing by digital video. The point of this example is that what was once acceptable only literally will increasingly be accepted virtually, in the digital venue.
Returning to Article 16 for insights regarding what is/are the "fundamental group unit[s] of digital society" and their entitlements "to protection by society and the state," the UDHR focus is on the family. One does not necessarily have to focus on "The family." Going back to UDHR Article 20 and the freedom of assembly and association, various digital groupings, above and beyond various forms of digital marriage, can be considered as entitled to such protection.
Digital personas and groupings of digital personas (digital nations) also need protection in cyberspace for whatever purposes those relationships are formed, so long as they are within the limits of the law. This is a pressing and difficult area, since social media site owners are making independent, non-transparent and unaccountable decisions as to which individuals and groups can have residency in their regions of cyberspace, and which digital personas are to be constructed or allowed to exist, based in part on the purposes and intent of the groups and to a large degree on the business interests of the social media site provider.17
This is an area where reflection and dialogue are called for, in order to fashion regulations and acceptable business practices, to reach a consensus on applicable social norms, and to establish dispute resolution mechanisms.
Property, Ownership and Behavior in the Internet Ecosystem
The advent of the digital spaces on the Internet ecosystem has prompted a massive creation of digital properties and a massive "land grab." This is promoting the need for deeper reflection on society's notions of property, property ownership and property use.
Regarding property (in the literal world), Article 17 of the UDHR is very clear.
Article 17: (1) Everyone has the right to own property alone as well as in association with others.
Article 17: (2) No one shall be arbitrarily deprived of his property.
Whatever is defined as property, one has the right to ownership and the right not to be arbitrarily deprived of one's property. Of course, ownership can be subject to a multitude of restrictions, covenants and entitlements, as in the case of land where there are zoning regulations, and riparian water rights.18
What counts as property is less clear in the Internet ecosystem. What are the "properties" of Internet properties? What are the rights and obligations of ownership for such properties? The technical infrastructure of cyberspace (the machines, cables and satellites, the buildings, and so on) clearly constitutes property in the traditional sense. The digital services they provide are a more complicated area. Many of the services are only enabled by legal and regulatory relationships with governments. Technically and legally, even Internet domain names are not owned properties. Their use is via a contract with an Internet registrar that has a contract with a registry that has, in turn, a contract with ICANN.19
Privately-owned undersea cables need landing rights at a country's shoreline. Satellites and wireless terrestrial systems need legal access to limited radio-frequency bandwidth. Government policies on competition and monopolies may dictate shared access by competitors, and the terms of that shared access. There are disagreements about regulations and terms of access, based on the extent to which user access should be treated as a public good, and providers should be treated as regulated utilities or be left to face unfettered market forces.20
Much of the value of a presence in the Internet ecosystem comes from the rights of ownership and/or access to digital assets and processes. While intangible in a literal sense, they are real in a substantive sense, in terms of their impact on the realities of life, community, commerce and governance. Included in this basket of intangibles are domain names, intellectual property, digital processes, and the ever-increasing and important assembly, storage, and use of archived data. While intellectual property is included in the UDHR's definition of property, it is important to note that these are not stand-alone rights but must be balanced with other human rights. For example, if an internet intermediary receives a demand letter from an IP owner today to cut off a person's internet access based on an allegation of IP infringement, the property rights in Article 17(1) must be balanced with the UDHR's other rights. That termination demand must be balanced with Article 11 (1), which states that,
"Everyone charged with a penal offense has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defense."
As a practical reality, made plain by life during a global pandemic, terminating Internet access could mean disconnecting an individual, an entire family, or even a broader community from the Internet. For many digital citizens, the Internet is the only way to access goods and services and to participate in their education, and for some, the only way to communicate with their loved ones. Termination of Internet access based on the property right clashes with UDHR Article 12, which states no one should be subject to arbitrary interference with his family, home, or correspondence. Article 10 confirms that citizens are entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations. Article 9 states that no one should be subject to arbitrary exile, and in this case, the blunt remedy of terminating one's internet access based on third party allegations of property rights disputes would result in a form of digital exile. Article 26 recognizes everyone's rights to education, and Article 27 recognizes the right to participate in cultural events, both of which, in today's pandemic, are only available through online access.
How the rights and duties of digital citizenship are defined and respected depends on how each of these intangibles is understood and handled, both via legislation and regulation, and via community norms woven into society's social fabric and its underlying social contract. As illustrated by the examples above, the need to get this balance right is essential.
Regarding the scope of property rights, countries may even differ on what constitutes "property." Japanese law states that "Data is intangible and not subject to ownership under the Civil Code."21 Japan differentiates between personal and non-personal data, as well as intellectual property rights in digital cyberspace. However, the law does not address the fact that so-called non-personal data, collected from non-transactional behavior (e.g., browsing) and from ambient sources (digital apps, Internet of Things, facial recognition, etc.), is nevertheless tagged to individuals. Such data is used to construct individual digital personas for a myriad of economic, political, and other purposes. It is constructed with neither the awareness nor the consent of the targeted person.
The landscape of the Internet ecosystem is rich with properties, and potential properties, that have considerable commercial value. Intellectual property — whether trademarks, trade names, or materials subject to copyright or patent protection — comprises virtual assets that have tangible value as digital commercial properties. Much of that tangible value depends on the data that flows through the Internet's infrastructure, the digital applications used to process it, and the ultimate uses to which it is put.
This poses a host of issues regarding what are the parameters of those properties. What are the rights and obligations of those property owners/holders? What are acceptable processes within digital cyberspace? What are the rights and obligations of those whose data is the raw material that feeds those processes and gives value to those properties and processes? What rights does the individual have to those personas constructed to assess one's personal, commercial and political behavior and tendencies, and what rights to the uses to which they are put?
What is clear here is that there is a rich agenda of work to be carried out with respect to understanding the notions of property within the Internet ecosystem. That understanding is integral to building responsible and effective Internet governance, as well as weaving norms of acceptable behavior into society's social fabric.
These understandings and agreements around them are integral to drafting appropriate regulations for the rights and duties of one's digital citizenship in the Internet ecosystem. Stakeholder engagement is essential in governance, and it is integral to society's efforts to rebuild a disrupted social fabric and underlying social contract to provide a guidance matrix for acceptable behavior within the Internet ecosystem.
There is a need for both a governance structure that produces effective Internet governance and a stakeholder-led rebuild of the social fabric in order to identify and protect the rights and duties of engaged digital citizenship.
Reflecting on the UDHR, it is important to remember that while data is virtual, it becomes as real as a rock in our hand when it impacts our individual and collective realities. Intangible data becomes tangible and real through the effect it has on our literal world. The digital and literal realms may be thought of as "separate but inseparable." Together they constitute the reality in which we now live.22
Data Ownership and related IP property issues: My Car is Spying on Me
Nothing becomes property until it has value in use. Medieval English village common lands, and tribal lands globally, were communal with common access. They became "enclosed" (privatized) when exclusion and entitled access brought benefits to some while diminishing the rights of others. The ownership and value of data are experiencing a similar enclosure movement as the result of three factors that are coming to fruition at the same time.
The first two factors are the quantum leaps in the capacity to store and process data. The third is the rapid growth of fifth-generation wireless communications technology (5G) for digital cellular networks.
Together they expand the ability of the broadly defined Internet of Things (IoT) to generate and share data in real-time.23 For example, autonomous vehicles share data at speeds necessary to support navigation in real-time. Such data is available in real-time, and in archived format, for other uses. Vehicular and cell phone data story plot lines are already standard fare for tracking "persons of interest" in popular television, movies, and online video, and have been used by countries like South Korea to track human movement for dealing with contact and isolation during the COVID-19 pandemic.
Archived vehicular data is time-, space- and process-specific and is already being used by manufacturers, insurance companies and others.24 Who has what rights to what, and on what conditions, in terms of the data I produce and the data captured about me by ambient devices? My car, my cell phone, and my heart monitor all share data on me with others elsewhere in the Internet ecosystem. Sharing it to what ends? My car is spying on me; I am not sure to what uses, and I probably don't like that.25 Even data that is said to be anonymized can readily be used to re-identify an individual with just a few added data elements.
This example demonstrates that ownership of an IoT "thing" (here, the car) means that owning the thing does not establish either data ownership or data control.26 The properties of digital properties, the ownership of digital properties, and the rights and duties of the producers and controllers of digital properties are all areas of ongoing policy development. It is crucial that those policy processes involve stakeholder participation by engaged digital citizens, while at the same time, society defines the fundamental notions of the rights and duties of digital citizenship.
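The re-identification point above is easy to demonstrate. Here is a toy Python sketch, using entirely made-up records, showing how just two quasi-identifiers (a home ZIP code and a car model) can be enough to link "anonymized" vehicle data back to named individuals.

```python
# Toy re-identification by joining "anonymized" trip data with a public record
# on a few quasi-identifiers (all data here is fabricated for illustration).
anonymized_trips = [
    {"home_zip": "10027", "car_model": "Model Y", "late_night_trips": 14},
    {"home_zip": "94110", "car_model": "Civic",   "late_night_trips": 2},
]
public_records = [
    {"name": "A. Resident", "zip": "10027", "registered_car": "Model Y"},
    {"name": "B. Resident", "zip": "94110", "registered_car": "Civic"},
]

def reidentify(trips, records):
    matches = []
    for trip in trips:
        candidates = [r for r in records
                      if r["zip"] == trip["home_zip"]
                      and r["registered_car"] == trip["car_model"]]
        if len(candidates) == 1:   # unique match on just two attributes
            matches.append((candidates[0]["name"], trip))
    return matches

for name, trip in reidentify(anonymized_trips, public_records):
    print(f"{name} likely made {trip['late_night_trips']} late-night trips")
```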
We share the same common lands today as the medieval English village if we acknowledge those common interests. The global pandemic has led some to acknowledge that intellectual property rights might lead to societal roadblocks to finding urgent cures and treatments. Some technology companies recently took the lead in announcing they will remove IP property roadblocks and grant free temporary licenses to their patented and copyrighted technologies to allow others to pursue treatments and cures without fear of IP legal ramifications. The Open COVID Pledge is an innovative example of how the global commons can be invoked to protect the broader public good when it is needed most.27
Data Markets and Data Owner/Controller Data Rights
There are massive markets for data, markets for specific data subsets and markets for so-called big data. Such data can be put to honorable uses, such as epidemiological health studies; to exploitive commercial or political ends; or to nefarious cybercrime purposes. Even when uses are prohibited, such as when law enforcement agencies are not allowed to use applications that track cell phone use, agencies can and do turn to third party entities that scrape, purchase and sell carrier cell tower data.28
Different regions of the globe are at different stages in thinking about legislation and regulations to deal with intellectual property, data ownership, and data privacy. Europe has enacted the General Data Protection Regulation (GDPR). Passed by the European Union (EU), it has extraterritorial elements and imposes obligations on organizations anywhere, so long as they target or collect data related to people in the EU. The US Congress has held hearings on data privacy and ownership issues, but there has been little movement in terms of legislation. Last year, then-Democratic presidential candidate Andrew Yang made "data as a property right" a central plank of his campaign platform29. The proposal was scarcely developed. Both the EU and US are considering wholesale overhauls of intellectual property laws and of safe harbors from liability under the US Digital Millennium Copyright Act and the EU's E-Commerce Directive. Whether new intellectual property rights will result in new liabilities and negatively impact citizens' right to access the Internet remains to be seen.
We are early in the necessary discussions around the rights and terms of access to data, how those rights and terms are reconciled with regard to the rights and obligations of digital citizenship, and how those rights are protected and those duties are carried out. Personal data as a tradeable good, or its use by the data controller, risks the establishment of a form of digital slavery, in which one's digital personas are placed in the service of others. That would constitute not just an assault on one's digital rights but, in a world where the digital and the literal are "separate but inseparable," an assault on one's fundamental human rights under the UDHR.
We have focused on the rights side of one's digital presence, one's ownership of digital properties. In this article, we opened the door a bit on digital obligations, but little has been said about the obligations that come with digital property ownership. That side of the coin will be dealt with when we get to the UDHR's Article 29:2, which states:
"In the exercise of his rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society."
The existing scope and scale of low integrity and predatory digital business practices operating in the cyberspaces of the Internet ecosystem are neither desirable nor sustainable. Many of them violate the notions of decency and trust that were integral to the social fabric and underlying social contract that we built for our literal world. The challenges here offer an opportunity to both push for integrity in digital business practices and to cultivate engaged stakeholder learning and participation in those efforts. The newness of the digital context and the long history of defending universal human rights offers an opportunity for engaged stakeholder participation in shaping the digital reality within the principles that guided the UDHR.
Our journey thus far through the UDHR has shown us the need for a governance process with its foundations in an empowered digital citizenship. The underlying principles of internet governance, at every level, can largely draw on the principles on which the UDHR is based.
As we continue and complete our journey through the rest of the Articles of the UDHR, we will more deeply understand how those principles can serve as the building blocks for the rights and duties of digital residency and citizenship in the cyberspaces of the Internet ecosystem. In the next piece in this series, we turn to UDHR Article 18, which addresses one's right to freedom of thought, conscience and religion, and explore what it means for the rights and duties of digital residence and citizenship.
A CircleID community dialogue series to assess challenges and implications of the coronavirus (COVID-19) pandemic on the Internet.
The COVID-19 pandemic has led to the rapid migration of the world's workforce and consumer services to virtual spaces and has amplified Internet governance and policy issues, including infrastructure, access, exponential growth in fraud and abuse, global cooperation, and data privacy, to name but a few. The need for practical, scalable and efficient solutions has risen dramatically.
This was the context in which CircleID hosted its first community dialogue via virtual conferencing, which took place on May 7. The topic was "COVID-19 and the Internet," as this is certainly top of mind in the CircleID community.
I had the privilege of moderating this event. It included a diverse line up of industry leaders who shared how they are responding to the crisis, how their perspective on their work may have changed, and where they think they are headed in the coming months and years. The conversation was broad-ranging as panelists shared what they were thinking in real-time. The goal was to frame questions during this transformational period for Internet infrastructure and Internet-based commerce. The answers are still unfolding.
The line-up of panelists included (in speaking order):
Head of European Policy at Cloudflare, Caroline Greer
ICANN CEO Goran Marby
Co-Chair of the Data, Privacy & Cybersecurity Practice at Greenberg Traurig, Gretchen A. Ramos
Presidential Scholar and Professor of Law at the University of Utah, Jorge Contreras
ArkiTechs Inc. CEO Stephen Lee
Mr. Marby explained ICANN's recent initiative to identify suspect "covid" domain name registrations. This initiative drew the most questions from the viewers, as this type of preemptive program is not normally offered by ICANN. Time will tell whether this program is successful and what impact it will have on ongoing concerns about phishing, malware, spam, botnets and acts of fraud that are pervasive in the DNS. Marby noted that, as of the panel date, of the 80,000 domain names reviewed, 7,000 were identified as potentially malicious.
Ms. Greer and Mr. Lee discussed how their companies are scaling up to meet the demands of their clients. Cloudflare is adding more staff and providing enhanced services. As expected, policymakers in Brussels are focused on COVID-19 response efforts and weathering the crisis. Mr. Lee pointed out that in less-developed regions, like the Caribbean, issues around sustainability are key. As demand for Internet access and bandwidth increases, the need for costly infrastructure increases. The challenge is to manage this increased pressure on internet resources in a time when normally strained economies are considerably more vulnerable.
Ms. Ramos observed how the transition to virtual operations has amplified questions around privacy and data security and that norms are changing. The question is, how? The world is waiting to see.
Professor Contreras described the Open COVID Pledge, a cooperative effort to share patents and copyrighted content in the fight against COVID-19. He is part of an international group of lawyers and academics that created the pledge and a model open-source license to facilitate cooperation and information exchange. Notable signatories include Amazon, Facebook, Hewlett Packard, IBM, Microsoft and the NASA Jet Propulsion Laboratory at Caltech. The world awaits the outcomes of this endeavor. We plan to check in to see the progress of this initiative.
All the participants stressed that, while swift action was imperative, much remains to be learned as to how the responses have worked, what more is needed, and how the changes implemented today will affect operations in a post-COVID-19 world. It was also noted that despite urgency and uncertainty, there is a lot of positive innovation and energy that has emerged from the response within the Internet community. The Internet was designed to be resilient and scalable. Its capacity and adaptability have never been tested as they have been in the last few months.
CircleID encourages you to watch and share your thoughts. They are especially interested in hearing what issues you think are ripe for more in-depth discussion. They welcome suggestions on topics, questions and potential speakers. Stay tuned…
Written by Lori Schulman, Senior Director, Internet Policy at INTA
Co-authored by Klaus Stoll and Professor Sam Lanfranco.
Surveillance capitalism monetizes private data that it collects without the consent of the individuals concerned, data it analyzes and sells to advertisers and opinion-makers. There has always been an intricate relationship between governments and surveillance capitalists. Governments have the duty to protect their citizens from the excesses of surveillance capitalism. On the other hand, governments use that data, as well as surveillance capitalism's services and techniques.
Donald Trump just outed himself as Surveillance Capitalist in Chief. Social media as we know it only exists because it is one of the main sources of data, revenue, and profits for surveillance capitalism. It is also Donald Trump's much beloved and much used bully pulpit, allowing him to reach 80.5 million people in an instant.
Why is Trump attacking it by alleging that Twitter was stifling his freedom of speech? Why has he followed that by signing a likely legally unenforceable executive order that empowers federal regulators to crack down on social media companies that allegedly censor political speech or exhibit political bias?
The inconvenient truth is that surveillance capitalism is incompatible with the truth. While the platforms pretend to serve millions, their business practices have been designed not in the interest of users but as ever more effective private-data harvesters in the service of a commercial and political elite. In truth, they care little about the truth of what users say or receive. They care about their return on investment.
Section 230 of the U.S. Communications Decency Act protects social media companies from liability for the content that users post on their platforms, unlike other media, which are held accountable for their content. This does not exempt social media companies from all responsibility for the veracity of content. Everybody on the Internet, be they private individuals, corporations, or the President of the United States, has rights and responsibilities.
Trump has the right to free speech, and he is also responsible for what he says, for its veracity, and for ensuring it does not harm others. If he is unable to express his opinions responsibly, it falls to those whose platforms he uses to act responsibly and flag his content with the intent to prevent harm from falsehoods. This is part of the give and take within the freedom of speech.
Such an intervention does not limit Trump's free speech. His opinion is still fully visible and unredacted. When its veracity is questionable or its content false, flagging it promotes user due diligence, a wider exercise of responsible free speech, and a generally more knowledgeable public dialogue.
Trump's response to Twitter's actions is to clothe his unfettered lack of veracity in the wrappings of free speech. The irony of Trump's Executive Order is that Twitter could become required to remove such postings of questionable veracity, rather than just flag them for due diligence.
Twitter's response is a "violation" of the first principle of surveillance capitalism: Separate what is morally and ethically inseparable. Separate rights from responsibilities. Separate data ownership from privacy. Separate falsehood from consequences, all in the name of surveillance capitalism's profits.
Trump needs a social media bully pulpit that frees him from any concerns about anybody or anything except himself and his interests, to win the next election.
While Twitter pursues baby steps by flagging Trump's postings, social media must choose which path to follow. The whole Internet ecosystem must choose which path to follow. How do we protect the rights and responsibilities of free speech, promote the veracity of content, and protect user privacy?
Social media have become dominant players in this area of the Internet and have a major role to play. How does society balance the private interests of surveillance capitalism and a public good that includes free and responsible speech, veracity of content and user privacy? Surveillance capitalism, with its exploitive business model and associated use by allied political actors, will opt for their responsibility-free privileges and unbridled profits. Others, in defense of the public interest and the integrity of the individual, will fight for the Internet as free and unbiased Network of Networks dedicated to serving the common good.
The fight over the path forward will be long, costly, and turbulent. Those who demand truth and integrity in social media endanger surveillance capitalism's business model, with its storehouses of data, money, and power. Drawing on Trump's unfortunate tweet, "when the looting starts, the shooting starts": on the Internet, that looting started two decades ago with social media warehousing and exploiting private data. Hopefully, with rightful and responsible free speech, veracity, and engaged citizenship, we can get beyond the data looting and restore dignity to the role of the Internet as a Network of Networks operating in the public good, without ending up where "the shooting starts."
There is an ongoing disagreement among various members and groups in the ICANN community regarding automation — namely, whether and to what extent automation can be used to disclose registrant data in response to legitimate data disclosure requests. A major contributing factor to the complications around automation has been confusion about how to interpret and apply Article 22 of the GDPR. In its opening paragraph, Article 22 dictates that: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
Indeed, the GNSO Expedited Policy Development Process Team on the Temporary Specification for gTLD Registration Data Phase 2 (EPDP) has been bogged down by this very question. Several prominent voices, such as those from the Registries Stakeholder Group and Registrar Stakeholder Group (RySG and RrSG, respectively) and legal feedback from Bird & Bird LLP, have advocated a "conservative" approach that assumes such automation would be regulated — and thus not permitted — by Article 22. But a newly-published article by Mike Hintze, partner and privacy law specialist with the firm Hintze Law, opines that Article 22 does not apply to the automated processing contemplated for the System for Standardized Access/Disclosure (SSAD). Based on a comprehensive analysis of the language, policy, and provided explanations of the GDPR itself, this article concludes that "Article 22 should rarely if ever apply to automated decisions to disclose personal data to a third party."
First, the article breaks down the above-quoted language of Article 22 to demonstrate that it would not just be inappropriate, but flat-out incorrect, to apply Article 22 regulations to automated processing and disclosure of registrant data in response to legitimate disclosure requests. Specifically, the article highlights Article 22's certainty requirement — that the automatic disclosure "produces legal effects concerning him or her or similarly significantly affects him or her" (emphasis added). Note this language does not say "may produce," or "is likely to produce," suggesting there must be a degree of certainty that the automatic decision inherently will affect the data subject in a legal or similarly significant way. If such certainty were not inherent, how would the data producer be able to make such a determination in advance? As applied to the automation issue at hand, it is not certain that the disclosure of personal data will inherently result in a legal or similarly significant effect — the third party receiving the data may choose to do nothing with the information.
If the third party does, however, choose to take action against the data subject that has legal or similarly significant effect, the automatic disclosure is at best an indirect cause of such effect. While the GDPR does not explicitly address the direct vs. indirect causation question, Hintze notes that the GDPR's text and practical examples demonstrate direct causation scenarios only as those falling under Article 22. If the GDPR drafters wanted to include under Article 22 automated decisions that could have an indirect legal or similarly significant effect, the drafters could have included such language. To assume indirect, "butterfly effect" type causation was intended to be included by the drafters is to believe the drafters intentionally created an impossible-to-escape maze where all avenues lead back to Article 22 — any automated decision could indirectly result in a legal or similarly significant effect if one were to follow the chain of causation far enough.
Furthermore, the article quells potential concerns that automated decision-making for the disclosure of personal data would be left entirely unregulated if not under the purview of Article 22. To the contrary, the article points out, the GDPR contains other provisions that are more suitable for protecting data subjects' rights with respect to automated processing. For example, Article 6 requires a lawful basis for automated decision-making and for processing the disclosure of personal data to a third party. Advocates for automated responses to disclosure requests universally understand and support that there must be an underlying lawful basis, such as requests from law enforcement and in response to Uniform Domain Name Dispute Resolution Policy/Uniform Rapid Suspension System (UDRP/URS) providers for registrant verification in an active UDRP/URS proceeding. Additional examples that have been proposed as appropriate for automation include well-founded allegations of intellectual property infringement, phishing, fraud, and other similar matters of consumer protection, all of which comply with Article 6's requirement for a legal basis.
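To illustrate what automation could look like in practice, here is a hypothetical Python sketch of a disclosure-decision rule keyed to the kinds of lawful bases listed above. The category names and the fallback to manual review are assumptions made for illustration only; this is not the SSAD design or any policy text adopted by the EPDP.

```python
# Hypothetical sketch of an automated disclosure-decision rule for registration-data
# requests (illustrative only; not the SSAD design or any adopted EPDP policy).
from dataclasses import dataclass

AUTOMATABLE_BASES = {
    "law_enforcement_request",
    "udrp_urs_provider_verification",   # e.g., registrant verification in an active proceeding
    "ip_infringement_well_founded",
    "phishing_or_fraud_investigation",
}

@dataclass
class DisclosureRequest:
    requester: str
    lawful_basis: str        # asserted Article 6 basis category (assumed labels)
    case_reference: str = ""

def decide(request: DisclosureRequest) -> str:
    # Automate only where a recognized lawful basis applies; everything else
    # falls back to manual review rather than being refused outright.
    if request.lawful_basis in AUTOMATABLE_BASES:
        return "AUTO_DISCLOSE"
    return "MANUAL_REVIEW"

print(decide(DisclosureRequest("dispute-resolution-provider",
                               "udrp_urs_provider_verification",
                               "hypothetical-case-123")))   # AUTO_DISCLOSE
print(decide(DisclosureRequest("unknown-party", "curiosity")))  # MANUAL_REVIEW
```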
This article contributes an important, fresh, and well-grounded perspective to the discussion around automation and the GDPR. While Registries and Registrars may justify their position as ensuring compliance with the privacy rules set out in the GDPR, the well-reasoned analysis put forth by Hintze shows there is no need to treat the automated processing contemplated for the SSAD as if it were regulated by Article 22. To do so is unnecessary and alarmist, and it results in a misplaced and overbroad application of GDPR restrictions to the detriment of the smooth functioning of the DNS.
With Article 22's restrictions out of the way, it is important that the EPDP fully embrace and include automation in its Final Report. Large-scale enforcement efforts by law enforcement, cybersecurity practitioners, and brand owners require automation to achieve meaningful results and improve online safety for end users. Without it, as things currently stand, such efforts are routinely hampered by impractical and tedious reveal requests, which are all too often ignored and then require subsequent legal action, such as filing a UDRP complaint or a lawsuit, just to obtain registrant information. This convoluted process is inefficient, excessively time-consuming, and ultimately enables bad actors to continue carrying out DNS abuse in the absence of essential, effective enforcement tools. The health of the ICANN community and of the DNS as a whole will thus benefit immensely from allowing automated processing and disclosure of registrant data in response to legitimate disclosure requests. As Hintze's article concludes, this can be done without fear of violating Article 22 of the GDPR, which does not apply to the automation contemplated by the EPDP.
Having a trademark (registered or unregistered) is a prerequisite for maintaining a UDRP complaint, but having one is not conclusive on the other elements: rights or legitimate interests, or registration and use in bad faith. The cautionary tale in many of these cases, especially for the Complainant, which carries the burden of proof, is that it must satisfy each of the elements in the three subsections of ¶4(a) of the Policy. For the avoidance of doubt, this means that contentions are fine as long as there is also evidence, preferably concrete, though circumstantial evidence will do if the proffer is of sufficiently high quality to support an inference of abusive registration. The Panel in Phumelela Gaming & Leisure Limited v. Danny Sullivan, D2020-0245 (WIPO June 19, 2020) is generous in giving advice to both parties. For Complainant, surmise is never sufficient to make a case. It asserted that "Respondent's purpose was to confuse members of the public who know the SOCCER 10 brand, or to interfere with the business of the Complainant," but the three-member Panel pointed out that this assertion "rest[s] on the assumption that Respondent was aware of Complainant's business and deliberately targeted that business. [But] Complainant offers no tangible evidence on these points."
The Respondent had its own problem, which raises an entirely different issue: it responded by email without certifying to the truth of the statements it was making, and it even failed to respond to a Procedural Order requesting that it do so. For this reason, as it must, the Panel set the response aside, which it had warned could have been fatal. Nevertheless, the Panel still found that the Complainant had failed to prove bad faith. In doing so, it makes the following critical points: 1) "Complainant bears the onus of establishing its case on a balance of probabilities," and 2) the Panel "does not have additional powers to exercise discretion or apply a lesser test if a complainant fails to discharge that onus." So, Respondent wins not because it rebutted Complainant's contentions, but because Complainant failed to prove its case.
It is useful to take a quick look at some other recent cases because they are enlightening about both the procedure and the substance of UDRP jurisprudence. What the Panel has to tell us in Phumelela Gaming reflects the consensus view. The same is true of the next two cases, which involve the first element of the UDRP: whether the challenged domain name is "identical or confusingly similar to a mark in which the complainant has rights."
Proving a "Right"
The UDRP forum is available to all owners who believe their trademarks or service marks are being infringed by domain names identical or confusingly similar to marks in which they allegedly have rights. The "have rights" element, the first of the three legs a complainant must prove under ¶4(a), is a critical threshold factor. It is not sufficient that the challenged domain name corresponds to the mark if the mark owner has no actionable rights. So, for instance, a mark owner may allege a common law right, but that is a "right" only if there is proof that the mark was distinctive before the registration of the domain name. Another alleged "right" arises where the mark is said to be in use in commerce but is registered only on the Supplemental Register; in that case, the complaint must be dismissed. And there are also circumstances in which rights exist but there is no proof of the elements of the other two legs. With these thoughts in mind:
In Nicolas Karl Reep v. Ali Bazzi, FA2004001891242 (Forum May 19, 2020), the Panel holds that "[r]egistration with the USPTO is sufficient to demonstrate rights in a mark under Policy ¶ 4(a)(i) but not if the trademark is on the Supplemental Register. If all that a complainant has is a registration on the Supplemental Register then that Complainant does not have standing to file a Complaint under the Policy since by definition it will not have acquired distinctiveness. That proposition has been clearly established for many years." It is possible, although Complainant in this case evidently did not pursue the issue, to establish (assuming supporting evidence) that, notwithstanding its continued registration on the Supplemental Register, the mark had nevertheless "acquired" distinctiveness and with it common law rights.
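For readers who find the standing rule easier to follow as a decision procedure, the following is a rough, illustrative restatement of the rule applied in Reep, expressed as a small Python function. It is a sketch only, under simplified assumptions; panels weigh evidence, they do not run code.

```python
# Rough, illustrative restatement of the standing rule discussed above.
# Simplified assumption: "rights" under Policy ¶ 4(a)(i) turn on registration
# status and, where needed, proof of acquired distinctiveness (common law rights).

def has_actionable_rights(
    principal_register: bool,
    supplemental_register_only: bool,
    proof_of_acquired_distinctiveness: bool,
) -> bool:
    if principal_register:
        # A USPTO registration on the Principal Register demonstrates rights.
        return True
    if supplemental_register_only:
        # The Supplemental Register alone confers no standing unless the
        # complainant proves the mark has acquired distinctiveness.
        return proof_of_acquired_distinctiveness
    # Unregistered (common law) marks: rights only with proof of acquired
    # distinctiveness, shown to predate the registration of the domain name.
    return proof_of_acquired_distinctiveness

# The complainant in Reep relied on a Supplemental Register registration and
# offered no evidence of acquired distinctiveness:
print(has_actionable_rights(False, True, False))  # False -> no standing
```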
In contrast, it is sometimes necessary to establish common law rights if, as in Bitrise Limited v. Dalton Kline, Bitrise Trade Ltd., D2020-0835 (WIPO May 29, 2020), the disputed domain name was registered after Complainant applied for trademark registration but before the registration issued. The Panel makes two salient points critical to understanding the law in this area. First, on common law rights: "The Complainant asserts that it has built substantial goodwill in the "Bitrise" name since 2015 and points to its client base, press coverage, substantial fundraising, and number of monthly average users in support of this contention." Second, on the issue of a respondent's actual knowledge of Complainant or its mark: "In some circumstances, this might have suggested that the Respondent may not have had knowledge of the Complainant or its rights at the time when it registered the disputed domain name and that it had coincidentally combined the ordinary English words 'bit' and 'rise' independently of the Complainant's rights. However, the Panel views this as unlikely on the facts of the present case. First, without evidence to the contrary, these words appear to be a relatively unusual and distinctive combination. Secondly, the Complainant's BITRISE mark had attracted substantial notoriety in the field of online platforms over several years prior to the registration of the disputed domain name, as demonstrated by the above-described press coverage produced by the Complainant."
Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP