A Lawsuit that reshaped the Internet

March 15, 2024

Stratton Oakmont v. Prodigy was filed while I worked at Prodigy. It had major impacts on the online lives of AOL, CompuServe, and Prodigy users at the time, and the issues it raised haven’t gone away. A single message board post ignited a legal firestorm. The implications of the lawsuit and the subsequent legislation dramatically shape what Internet content providers like Facebook (Meta), TikTok, and Twitter (now X) can do today and how they do it.

My Prodigy ID badge, with my name misspelled.

Here is what happened: One of Prodigy’s most popular features was its bulletin boards. There were hundreds of boards, with topics on every subject imaginable. On the financial discussion board “Money Talk,” a member referred to the Long Island-based securities brokerage firm Stratton Oakmont as “a bunch of crooks” and said they had committed fraud. Our bulletin board leader, an outside contractor at the time, did not remove the post, and Stratton Oakmont sued us.

Upon hearing of the lawsuit, I called a Wall Street friend and asked if he’d heard of Stratton Oakmont. His response left me dumbfounded. Before I could even finish my question, he cut me off with a gruff chuckle: “Stratton Oakmont? Yeah, everyone knows those crooks…” As head of Prodigy’s communications products line of business, I knew we had a problem and headed down to the office of our chief legal counsel, George Perry. Partially tongue-in-cheek, I suggested, “George, perhaps we should argue the case on its merits.”

Prodigy’s Bulletin Boards had issues on many levels:

  • Originally envisioned as a sideline, this user-to-user content had become one of the most popular features among our nearly 2 million subscribers.
  • Sears and IBM had always marketed Prodigy as a “family-oriented” service, with codes of conduct and rules for members. This was most visible on the Bulletin Boards, where each posting was screened for dirty words (there was an always-evolving “dirty word” list, and I still have a copy of it).
  • The “scanners” divided postings into one of three categories: white (no problem; posted immediately), black (contained banned words; returned to the poster as unacceptable), or gray. The grays were the problem: the scanner wasn’t sure, and a real human being had to review the post and make a judgment call. (A simplified sketch of this triage appears after this list.)
  • This was fine at a few thousand posts a day, but a different matter at closer to hundreds of thousands a day. As the Bulletin Boards grew, the group of human editors grew from a handful to a full-time, 24x7 team, with 15 to 20 people on duty at any time. And they were still swamped. Each gray had to be resolved as black (returned to the user to reword) or white (posted as submitted) within a certain number of hours.
  • The scanning group also had to update the rules constantly: “bitch,” unacceptable nearly everywhere, was fine on the Pets Bulletin Board. Users demanded an “adults-only” bulletin board, where the rules were more liberal and you had to promise you were over 18 and wouldn’t be offended by the content.
  • The number of grays kept growing, as did the difficulty of making the right calls, which increasingly pulled experienced managers in to make the tough ones.
  • The Stratton Oakmont case made the company realize it could be liable for user-posted content if it actively monitored its bulletin boards. Eventually, on my watch, we eliminated our moderation practices, appointed volunteers to lead the various bulletin boards, and allowed each bulletin-board community to establish its own guidelines and enforcement policies, without our involvement. Non-employee volunteers were given the tools to enforce the community standards, and the company got out of the moderation business, dramatically cutting costs and making many of the bulletin board users happy.
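For the technically curious, here is a minimal sketch of what that white/black/gray triage might have looked like in code. The word lists, the per-board exceptions, and the function are hypothetical placeholders of my own invention; the actual scanner was proprietary and considerably more involved.

```python
from enum import Enum

class Verdict(Enum):
    WHITE = "no problem: post immediately"
    BLACK = "banned words: return to poster for rewording"
    GRAY = "uncertain: queue for human review"

# Hypothetical stand-ins for the always-evolving "dirty word" list.
BANNED = {"bitch"}              # rejected outright on most boards
SUSPECT = {"crook", "fraud"}    # ambiguous; a human must decide

# Per-board exceptions: "bitch", unacceptable nearly everywhere,
# was fine on the Pets Bulletin Board.
EXCEPTIONS = {"pets": {"bitch"}}

def scan(posting: str, board: str) -> Verdict:
    """Triage one posting into white, black, or gray."""
    # Naive tokenization for the sketch; a real scanner would need
    # punctuation handling, stemming, phrase matching, and so on.
    words = set(posting.lower().split())
    allowed = EXCEPTIONS.get(board, set())
    if words & (BANNED - allowed):
        return Verdict.BLACK
    if words & (SUSPECT - allowed):
        return Verdict.GRAY  # had to be resolved within a set number of hours
    return Verdict.WHITE

assert scan("my bitch won best in show", "pets") == Verdict.WHITE
assert scan("those guys committed fraud", "money_talk") == Verdict.GRAY
```

The grays were the expensive part: every gray landed in a queue in front of a human editor with a clock running, which is why the review team grew into a 24x7 operation and was still swamped.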

Then came the lawsuit itself. Like most lawsuits, it was not unexpected. There was a plan, and the eventual result was the passage of the Communications Decency Act in 1996, which (in what became known as Section 230) established immunity for internet service providers publishing “information provided by another information content provider.” The US House of Representatives explicitly stated that with this act, it intended to overturn the result reached in the Prodigy case.

I had the chance to lobby for the passage of this act with Prodigy’s head of PR, Brian Ek, along with some hired experts. The briefings were in the Washington, DC offices of various elected officials, where everyone listened respectfully, asked good questions, and took notes. None of the meetings had a Senator or Representative attending, only senior staff. After nearly a full day of these meetings, explaining why their vote on the act was so important, I was tiring when one senior staffer pulled me aside and said, “Your presentation was good, but you missed the most important bit of information.” I waited, and he continued: “You failed to tell us how many votes my Representative is going to gain (or lose) in his district if he votes the way you recommend.” As I sat in that Washington office, listening to the staffer’s blunt words, my frustration boiled over. All our careful arguments, all our hard work, and it came down to pure politics? As I bit my tongue to keep from saying something I’d regret, he added: “Hey, I can see what you’re thinking, but that’s democracy, my friend, and if you don’t like it, you should move to another country.”

In the lawsuit, Stratton Oakmont argued that we were in fact a publisher, one that had published defamatory material, and that we were therefore liable for the postings under the common law definition of defamation. Prodigy requested dismissal on the grounds that we could not be held liable for the content of postings created by our third-party members/users.

In May of 1995, the court held that Prodigy was liable as the publisher of the content created by its users because we had exercised editorial control over the messages on our bulletin boards: (a) we had established content guidelines for users; (b) we enforced those guidelines through our “Board Leaders”; and (c) we used scanning software designed to remove offensive language.

Internet enthusiasts jumped to our defense. They pointed out that expecting website operators to accept liability for the speech of third-party users was crazy: it was impossible to enforce and would likely stunt the development of the Internet. Eventually, the government got involved and passed what would become known as the Communications Decency Act of 1996.

This issue has never fully gone away, continuing in the courts and legislatures. Today it has taken on a political tint, with Florida and Texas writing their own laws forbidding social media platforms from “censoring” favored causes and individuals. In February of this year, the Washington Post reported on an incident on the Internet’s leading Star Trek forum, where users are supposed to abide by a simple rule: “Be Nice.” When a user called one of the characters a “soy boy” (a term insulting someone’s masculinity), the board’s volunteer moderator kicked him out. But the user filed a lawsuit based on the new Texas law prohibiting social media companies from removing posts or accounts based on viewpoint, an unprecedented regulation subverting how the internet has been operating. And it’s heading to the Supreme Court.

Joe Heller brilliantly captures the confusion some have with this issue.

TikTok is also under fire, with its Chinese ownership in the mix. Although some claims are unproved or disputed, they include TikTok prompting users to contact their members of Congress and algorithms pushing Chinese Communist Party propaganda to US users. But wouldn’t a ban violate the First Amendment?

Once again, it will be the Supreme Court that decides if states can control the content moderation policies of large social media platforms like Facebook and Twitter. Texas and Florida’s laws prohibit these platforms from removing posts or accounts based on viewpoint. The tech industry argues the laws violate the First Amendment, giving the government too much control over online speech.

Competing principles of free speech drive the arguments. Tech companies liken themselves to publishers with editorial discretion, while the states see them as modern “common carriers” that should be regulated like utilities. The ruling will likely have sweeping impacts: upholding the laws could usher in a patchwork of state regulations making content moderation difficult, while striking them down could give platforms legal cover to remove more content.

An important principle of American democracy is at stake here, especially in the midst of rampant online misinformation. The court must chart a nuanced path forward for the internet’s emerging leadership role in political speech. Good luck to them; it’s a tough one. I know, I lived it.


One Response to A Lawsuit that reshaped the Internet

  1. Bill B. says:

    Thanks, Steve. Missed you at lunch on Thursday. I will read this again and follow the rest of the links.

    The word “conundrum” was coined for this situation. A Chinese company that spreads the propaganda of a foreign adversary and collects data on American citizens needs to be controlled. Right? We arrest human spies and “disinformation agents” when we catch them. The star witness in the Biden impeachment inquiry is under arrest for lying to the FBI and having ties to Russian spy agencies.

    If social media companies and ISPs are “common carriers”, they can’t be in the business of monitoring content; we can say any stupid, illegal (e.g., planning a heist), or profane thing we want over the phone, and any liability is on us personally. On the other hand, in a public meeting, people can be ejected for being rude, abusive, profane, or simply out of order. And your grocery store or gas station can take down index cards from the cork board by the Coke machine offering “services” they deem inappropriate.

    Furthermore, anything that “amplifies” content in a way that demonstrably pushes, solidifies, or polarizes people into a particular point of view, referred to by some as “radicalization,” is not good for our democratic norms and systems, and it doesn’t take a social science research department to identify the effects. (Are you listening, Mark Zuckerberg?) The “common carrier” model would eliminate many issues and would prohibit “amplification” and other algorithms, but on the flip side, the lack of filtering could turn the internet and social media milieu into more of a cesspool than they already are.

    A “common carrier” model generally involves only two parties, between whom the “content” remains private unless they choose to discuss it with others. Even the now-ubiquitous “conference call” or Zoom meeting content remains relatively “private”, and usually has a moderator who can mute or disconnect those who don’t play by the rules. (I have never heard of a First Amendment suit resulting from conference call moderation.)

    This is where I believe the “common carrier” model breaks down: Bob telling Joe his viewpoint is imbecilic [on a phone call] stays between Bob and Joe unless they choose to discuss their conversation with others. If Bob posts the same opinion on social media, the private conversation could now have hundreds of millions of viewers, and otherwise uninvolved users could be drawn into the fray if Zuck’s amplification algorithm calculates they would find it “engaging”. A private difference of opinion turns into a [virtual?] street brawl.

    Conundrum. I don’t have a solution.

    Thanks for this enlightening post. I will think about it, and more than likely seek to discuss it with you again.

    By the way, you have a talent for finding yourself in the middle of things… Bill
