A few days ago, I visited The Wall Street Journal’s site and received a pop-up message with the title “Elevating Our Discourse.”
The Journal has elected to make commenting a benefit offered exclusively to subscribers (like me). Not every Journal article will include a comment section, and those that do will close comments after 48 hours. In its updated Rules and FAQs, the Journal specifies that commenters must use their real names; must be “civil and respectful and stay focused on the subject at hand”; and must refrain from including self-promotional links or other spam, among other similar guidelines.
Like many other major news organizations, the Journal has decided that the best way to ensure high-quality reader participation on its site is to involve human moderators who can review reports of abuse, proactively head off potentially abusive comments and flag exemplary comments as potential “featured” responses. All of this takes a lot of work, but it means that readers can generally trust that looking at an article’s comments will yield intelligent, thoughtful debate – or, at the very least, that it will not yield a wave of Caps Lock, profanity and hate speech.
The Journal’s approach is echoed in many other publications. The New York Times closes comments after 24 hours; The Washington Post waits longer – 14 days – but reserves the right to shut down discussion sooner if its quality devolves. Neither solicits comments on every article published. The South Florida Sun Sentinel notes that it only moderates comments on certain articles, when “the comment thread has gotten out of hand or [they] suspect it might.” The Los Angeles Times, instead of using a featured comment system, rewards positive commenter behavior with points, which can be used to promote a particular comment. All of these publications require registration in order to comment on an article, though none of them require commenters to be paid subscribers. (They also have less robust paywalls than the Journal, which means more nonsubscribers may end up reading articles in the first place.) Even the Journal’s decision to go subscriber-only is not entirely unprecedented; the Canadian publication The Globe and Mail made the same change last year.
The long-standing commenting policy on Palisades Hudson’s website is similar in some ways to the approach taken by the Journal and other publications. Our comments are always moderated, and we ask that commenters keep their remarks on topic and civil in tone. We also ask for the writer’s full name. Like many of the publications I’ve mentioned, we do not open comments on every article on our site, and we do not leave comments open indefinitely, in order to keep moderation work manageable. But within these boundaries, we find that our writing is better for incorporating the perspectives of our readers – some of whom we know, but many of whom we do not.
In the early days, our moderators had an even more arduous job, as they had to sift through not only genuine comments but also large quantities of spam, much of it bot-generated. These days, however, there are tools to filter out the most obvious attempts to flood comment sections with low-quality links. Many other sites use additional tools to ensure commenters are human, including CAPTCHAs, site-specific logins, and social-media-based systems that attribute comments to a Facebook or Google profile.
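To give a sense of what such filters do, here is a minimal sketch of the kind of first-pass check a site might run before a human ever sees a comment. The link limit and keyword list are assumptions chosen purely for the example; real filtering services rely on far richer signals than this.

```python
# Illustrative first-pass spam check for incoming comments.
# The threshold and keyword list are assumptions for this example,
# not a description of any particular vendor's filter.
import re

MAX_LINKS = 2  # assumed limit on links per comment
SPAM_KEYWORDS = {"free money", "click here", "buy now"}  # illustrative only


def looks_like_spam(comment: str) -> bool:
    """Flag comments that are mostly links or use obvious spam phrasing."""
    text = comment.lower()
    link_count = len(re.findall(r"https?://", text))
    if link_count > MAX_LINKS:
        return True
    return any(phrase in text for phrase in SPAM_KEYWORDS)


# Example: flagged both for excessive links and for spam phrasing.
print(looks_like_spam("Buy now! http://a.example http://b.example http://c.example"))
```

A check this crude only weeds out the laziest bots, which is exactly the point: it reduces the pile so human moderators can spend their time on judgment calls rather than deleting link dumps.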
Even as technology advances, human moderators remain essential. As Ryan Broderick recently noted on BuzzFeed, a good moderator is “like combining a sheriff and a librarian.” It is a nuanced and sometimes difficult job, but the payoff is a much better online experience. Much of this is simply good website citizenship – making sure that visitors to your site find the comments useful and engaging.
Lately, however, there has been a trend toward requiring websites to bear responsibility for all content they publish, including user-generated content. In Europe, this push has recently focused on copyright rather than on hate speech or defamation. Article 13 of the EU Copyright Directive, which was finalized in February, states that services can be held responsible if users upload copyrighted material without authorization.
Emphasis shifted again after the recent mass shootings at two New Zealand mosques, which the shooter livestreamed on Facebook. The recording has proven difficult to scrub from the internet. Australia quickly enacted legislation to hold social media platforms responsible for violent content that is not immediately removed, and the government proposed more far-reaching regulation of hateful online content. Regulators in the United Kingdom have proposed a similar crackdown.
Here in the United States, Section 230 of the Communications Decency Act generally protects internet businesses that host content created by third parties from legal liability for that content. Some lawmakers have called for changes to Section 230: opponents say such changes would strike a serious blow against online infrastructure, while supporters claim they would make major platforms fairer and more neutral.
It is in this context that we might consider YouTube’s most recent struggle against abusive and potentially exploitative comments on videos featuring children. News broke that a group of users was regularly leaving inappropriate comments on such videos, including some with timestamps and details noting when the children in the videos were in stages of undress or assumed positions the commenters found suggestive. In response, YouTube – which is owned by Google – disabled commenting on all videos featuring children younger than 13, as well as on some videos featuring children between 13 and 18, depending on the video’s content. The company allowed exceptions for “a small number of channels that actively moderate their comments and take additional steps to protect children.” YouTube’s current moderation strategy is based on a combination of machine learning and human reviewers.
Going forward, it might make sense for YouTube to require all channel owners to moderate comments on their own videos and to institute consequences, such as blocking or removal, for creators who permit inappropriate messaging. Such a system would need clear rules and a transparent process for appealing one’s case to a human, since algorithms sometimes miss nuances people can easily spot. For a company the size of YouTube, however, these factors should not be prohibitive. In fact, Google has already rolled out a tool called Perspective that uses artificial intelligence to block potentially toxic comments automatically. (The AI is also available to Chrome users through the “Tune” browser extension.)
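To make the idea concrete, here is a rough sketch of how a site might use Perspective’s public REST endpoint to triage comments, scoring each one and holding anything above a threshold for human review. The request format follows Perspective’s published documentation as I understand it; the API key, threshold, and routing logic are illustrative assumptions, not YouTube’s actual pipeline.

```python
# Sketch of comment triage using Google's Perspective API.
# Endpoint and request shape follow Perspective's public docs;
# the key, threshold, and routing policy below are assumptions for illustration.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"       # placeholder; request a key from Google
TOXICITY_THRESHOLD = 0.8       # assumed cutoff; tune to your community


def toxicity_score(comment_text: str) -> float:
    """Return Perspective's estimated probability that the comment is toxic."""
    payload = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def route_comment(comment_text: str) -> str:
    """Auto-publish low-risk comments; send the rest to a human moderator."""
    if toxicity_score(comment_text) >= TOXICITY_THRESHOLD:
        return "hold for human review"
    return "publish"


if __name__ == "__main__":
    print(route_comment("Thanks for the thoughtful article!"))
```

A setup along these lines keeps the automated step conservative: the machine decides only what a human must look at, not what gets deleted.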
Creators who do not have the time or resources to effectively moderate their comment sections should have the option to turn off comments for their videos altogether, which would not necessarily be a terrible outcome. While comments allow fans and creators to connect, many YouTubers have active social media presences that offer fans alternate ways to interact. Creator-focused platforms like Patreon also allow for more easily controlled discussions.
Trolls, spammers and other abusive users contribute nothing, while depriving the rest of us of the opportunity to engage in rational and informative discussions without having to hold our noses as we wade through their muck. If a site bothers to allow comments at all, it should take responsibility for what it publishes, just as print media always have. That doesn’t mean doing away with user-generated content entirely. It just means doing away with troll-generated junk.
Larry M. Elkin is the founder and president of Palisades Hudson, and is based out of Palisades Hudson’s Fort Lauderdale, Florida, headquarters. He wrote several of the chapters in the firm’s recently updated book, The High Achiever’s Guide To Wealth. His contributions include Chapter 1, “Anyone Can Achieve Wealth,” and Chapter 19, “Assisting Aging Parents.” Larry was also among the authors of the firm’s previous book, Looking Ahead: Life, Family, Wealth and Business After 55.