24 November 2010

Section 230

Glenn Reit is an Upper East Side dentist who recently sued Yelp and lost. The trouble began in May 2009, when “Michael S.” posted a negative review of Reit’s practice on its Yelp page, which until then contained about ten uniformly positive reviews. To Michael S., Reit’s office was “small,” “old,” and “smelly,” his equipment “old and dirty.” Calls for new consultations dropped markedly that month.

Yelp wasn’t cooperative when Reit called for help. Not only did they refuse to remove Michael S.’s allegedly defamatory post, but, according to Dr. Reit, they proceeded to delete all his positive reviews, in an effort to coerce him into buying advertising on the site. If Reit had paid for ads, the complaint alleges, he would have gained some control over his reviews. In March, Reit brought a defamation suit against both Michael S. and Yelp, further claiming that Yelp violated New York’s deceptive trade practices statute. In September, however, a New York state court granted Yelp’s motion to dismiss Reit’s claims against them, leaving Reit only the ability to pursue his defamation claim against Michael S. directly. If he chooses this route, Reit will face an uphill battle: Even finding out Michael S.’s real name would require some showing of the case’s merits and perhaps a First Amendment analysis. And since Michael S.’s pockets probably aren’t quite as deep as Yelp’s, that battle is unlikely to be worth the cost.


This isn’t the first time Yelp has been accused of this sort of pay-to-play practice. In early 2009, several newspapers rounded up allegations that Yelp made sales pitches to rearrange and remove reviews in exchange for ad money. And in February of this year, Cats and Dogs Animal Hospital in Long Beach headed up a class action suit against the website under California’s somewhat amorphous unfair competition law.

Throughout all this, Yelp has denied any shady dealings. They’ve insisted that the only review-related benefit advertisers received was the ability to select one “Favorite Review” that Yelp bumped to the top of the business’s page, albeit with a notice that the business had specially selected it for viewing. To explain the observed shifts in review presence and order, Yelp has pointed to their automated “review filter,” which suppresses reviews from less “established” users. If a user becomes more established, her reviews will suddenly appear; if she becomes less established, they might disappear. Yelp credits much of its success to the filter, which they say prevents the dual plague of false positive reviews from business owners and false negative reviews from their competitors.

This April, however, Yelp made two changes to its policies: It eliminated the “Favorite Review” benefit for advertisers, and it enabled users to click through to see reviews currently filtered out of a business’s page. These modifications were no confession; Yelp framed them instead as ways to “reinforce that trust” on which its success rests.

The deceptive-trade-practices prong of Dr. Reit’s case was based on alleged discrepancies between Yelp’s claims in its Business Owner’s Guide that its review sorting is “entirely automated to avoid human bias” and the reality Reit painted in his complaint. The court was cursory in disposing of this charge. Under the New York statute, to qualify as a “deceptive practice,” business conduct must first be “consumer-oriented” and “materially misleading to a reasonable consumer.” Since Yelp’s allegedly deceptive conduct was directed at other businesses, not at consumers per se, Dr. Reit’s claim did not meet these threshold requirements and thus had to be dismissed. Because the plaintiff apparently didn’t raise the issue, the court did not address the idea that businesses are in fact the only possible “consumers” of Yelp’s advertising service, or the possibility that Yelp also makes user-directed claims of being entirely uninfluenced by “human bias.”


The court’s analysis of Reit’s defamation claim against Yelp was more extensive, but ultimately just as simple. And, this time, rightfully so: Section 230 of the Communications Decency Act, a federal law passed in 1996, grants websites that host third-party speech broad immunity from lawsuits based on that speech. Aside from copyright and federal criminal law, Section 230 covers just about every kind of legal claim, largely freeing providers of “interactive computer services” from worries that they’ll be held responsible in court as the publishers of the stuff—the reviews, the comments, the tweets, the status updates—that their users post. Here, since it was Michael S. who allegedly defamed Dr. Reit, rather than Yelp itself, Section 230 blocked the attempt to hold Yelp responsible for Michael S.’s speech.

Advocates for free expression on the internet often tout Section 230 as a rare triumph of congressional foresight, one that has allowed the web to remain wild and free, fostering, at least potentially, a true marketplace of ideas. In effect—and ignoring other developments in internet law and business that have censored and distorted that marketplace—this may be true. But Congress’s original intentions in approving Section 230 were in fact more aligned with family values than with civil liberties.

For ordinary consumers in the early ’90s, what going online felt like depended largely on which service provider you used. That’s because what you gained access to by “going online” was not the Web as we now know it, but a proprietary network run by your provider, often consisting of themed “electronic bulletin boards,” in the parlance of the times. Prodigy was one of these networks, and it distinguished itself from competitors like CompuServe by exercising relatively more editorial discretion over the contents of its boards. In a 1990 New York Times editorial defending its screening policies, Prodigy’s marketing director billed the company as a “family service” that “make[s] no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve.” The editorial began with examples of what Prodigy filtered out: solicitations for suicide advice, accusations of embezzlement, confessions of pederastic desire, directions for stealing cable. The heavy stuff.

In 1995, Prodigy was sued for libel by Stratton Oakmont, Inc., an investment banking firm. On the popular “Money Talk” Prodigy forum, someone had accused Stratton of “100% criminal fraud” in relation to one of its IPOs and characterized it as a “cult of brokers who either lie for a living or get fired.” For the case against Prodigy to move beyond its initial stages, a New York state court had to determine whether Prodigy was the “publisher” of those statements for the purposes of a libel claim. The court held that Prodigy was indeed a publisher, distinguishing the case from an earlier one in which a federal court had held that competitor CompuServe was a mere conduit of information. Crucial to the court’s distinction was Prodigy’s public presentation as “controlling the content” of its bulletin boards, and its partial execution of that promise through automated screening software and editorial “Board Leaders.” The court was careful to point out that its decision was not a blanket rule; it was only saying that “to the extent computer networks provide such [‘family-oriented’] services, they must also accept the concomitant legal consequences.” In other words, under Stratton Oakmont, the less you tried to police content, the less accountable you were for it. (Prodigy and Stratton Oakmont settled out of court before the case went to full trial.)

The legal incentive not to screen created by Stratton Oakmont worried policymakers in Congress, whose constituents were in the midst of the mid-’90s hysteria over the many dangers (mostly sexual) that the internet posed to children. The legislative response eventually emerged as Section 230 of the broader Communications Decency Act. The key language of Section 230 is in 230(c)(1), which establishes a broad, unqualified rule: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The rule did not have to be structured this way. It could instead have conditioned immunity on the fulfillment of certain actions. Take, for example, this hypothetical 230: a website shall not be treated as the publisher or speaker of third-party content so long as it implements a policy through which unlawful content is promptly removed upon user allegations of illegality. In 1998, the Digital Millennium Copyright Act’s “safe harbor” provision implemented this sort of intermediary liability scheme for online copyright infringement. For example, if a YouTube user posts a funny morning news blooper, and the owner of the copyright in the news show notifies YouTube of the infringement, YouTube can guarantee immunity from any lawsuit based on the infringement only if it “responds expeditiously to remove” the content. It’s unsurprising that Section 230 wasn’t written as warily as the DMCA, because in the former case Congress’s focus was so drawn to the “Good Samaritans”—those services that wished to screen content but were put at risk by potential liability. As a result, a statute meant largely to engender a family-friendly web managed to allow exactly the opposite. You could screen for offensive, abusive, and defamatory speech if you wanted to, and were free from liability if something slipped through; but you were also free from liability if everything slipped through, if you didn’t screen for anything.


And yet, despite the breadth of Section 230, people like Dr. Reit still bring suits against websites like Yelp trying to treat them as the publisher or speaker of third-party content. To explain these lawsuits, one might simply point to lazy lawyering. But these plaintiffs are also pressing on real indeterminacy within the statute. Section 230 was drafted well before the rise of “Web 2.0,” when the distinction between a website and its users—or, in the statute’s own terms, the category of “information provided by another information content provider”—became harder to discern.

In the 2008 case Fair Housing Council of San Fernando Valley v. Roommates.com, the Ninth Circuit retrofitted Section 230 to this more porous internet through some judicial-interpretive handiwork. The case revolved around the eponymous website, a roommate-matching service. To join the site, each user was required to answer a variety of questions that became part of the user’s profile page. Crucially, the user was required to select, from drop-down menus, her sex, sexual orientation, and family status, as well as the preferred sex, sexual orientation, and family status of her next roommate. The Fair Housing Council of San Fernando Valley sued the site for violations of the Fair Housing Act and California’s fair housing law, which together bar discrimination based on those very things. Roommates.com claimed Section 230 immunity, asserting that under the statute it could not be held responsible for the discriminatory preferences stated by its subscribers. On April 3, 2008, the court rejected the website’s argument, concluding that Roommates, with its mandatory profile and its pre-populated drop-down menus, had too much of a hand in the production of the unlawful content for that content to count as “provided by another.”

This ruling makes sense. What Roommates.com did would be a bit like Facebook requiring you to post a status about your friend, giving you a drop-down menu of defamatory predicates to choose from—“had sex with her boss for a promotion,” “stole my TV”—and then trying to use Section 230 to escape liability for defamation. But a court has to hang its hat on some legal reasoning, and in a case based on a statute, this hat-hanging usually comes from a close reading. Take another look at 230(c)(1):

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider [emphasis added].

The court’s first move was to remind us that 230 immunity is triggered only if the website in question doesn’t itself count as a “content provider” with regard to the information at issue. Next, the court looked at the statutory definition of “information content provider”:

The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service [emphasis added].

Here, the court stressed that a website can count as a content provider even if it develops the content only in part. So the question became whether Roommates.com, in requiring users to select from a predetermined set of discriminatory answers, “developed” those answers “in part,” thus qualifying as a “content provider” of that information. To answer this question, the court had to define “develop,” and here the statute gave no guidance. If the court defined “develop” too broadly, it would effectively gut Section 230 from the inside out, since nearly any involvement with third-party content could count as “developing” it. If it defined “develop” too narrowly, it risked committing the interpretive sin of rendering statutory language superfluous, since the word’s meaning would overlap entirely with “creation.” The court settled on this:

A website helps to develop unlawful content, and thus falls [outside Section 230], if it contributes materially to the alleged illegality of the conduct.

At first glance, you think: Ah, the distinctive anticlimax of judicial reasoning. Since “contributes materially” is hardly less amorphous than “develop,” the new judicial gloss would not bind Ninth Circuit courts very far beyond the specific facts of the Roommates case. And due to the branched architecture of the federal court system, courts outside the Ninth Circuit aren’t bound by the case at all—though the opinion does constitute “persuasive authority,” most of all in lower courts whose own circuit judges have yet to speak on the matter. It’s in this more diffuse way that the case has exerted and will continue to exert its influence. Most important, Roommates lends credence to the idea that the structure of a website—specifically, the methods by which it solicits and obtains content from third parties—can have a determinative effect on whether it is immune from liability based on that content.

Through the lens of Roommates, Dr. Reit’s defamation case against Yelp still seems easy. All Yelp did was give Michael S. a wide-open text box to fill to his heart’s content. Unlike the Roommates.com drop-down menus, Yelp’s structure didn’t force Michael to say anything, let alone something allegedly defamatory. One need not be too imaginative, though, to see the pervasive culture of a website as “contributing materially” to the illegal content produced by its users, and thus falling outside Roommates’s conception of Section 230 immunity. Even under this interpretation, Yelp would probably be okay: Its corporate tone is more exclamation-point-heavy, optimistic millennial than anything else. But a website that pretty explicitly places a premium on insulting/defamatory comments from users may be in shakier territory. So far, though, most courts citing Roommates haven’t gone down this path, interpreting the case as applying only where a website gives users no option other than to act unlawfully. In fact, in 2009 two federal district courts thwarted attempts by state and county law enforcement to hold Craigslist responsible for the illegal content posted in its “Adult Services” section; in doing so, one of the courts expressly rejected the Roommates-based argument that the very words “Adult” and “Services” constitute the sort of inducement sufficient to clear Section 230’s “development” hurdle.

If this narrow interpretation of Roommates sticks, content-hosting websites may continue to enjoy the broad legal protection that Section 230 bestowed on them fourteen years ago. Of course, legal protection is not the same as political protection. Despite its Section 230-based wins in court, Craigslist recently shut down its Adult Services section for good, amid increasing public pressure from seventeen state attorneys general. Massachusetts A.G. Martha Coakley, who prides herself on having “spent [her] career keeping kids safe,” has gone so far as to suggest amending Section 230 to reduce or eliminate immunity. Seeing as the profitability of the social internet depends in part on the statute’s immunity provisions, any like-minded congressperson would face mammoth opposition from the likes of MySpace, Facebook, and Google.

Image: "Scales of Justice." Image from public-domain-image.com.
