The people overseeing the security of Google’s Chrome browser explicitly forbid third-party extension developers from trying to manipulate how the browser extensions they submit are presented in the Chrome Web Store. The policy specifically calls out search-manipulating techniques such as listing multiple extensions that provide the same experience or plastering extension descriptions with loosely related or unrelated keywords.
On Wednesday, security and privacy researcher Wladimir Palant revealed that developers are flagrantly violating those terms in hundreds of extensions currently available for download from Google. As a result, searches for a particular term or terms can return extensions that are unrelated, inferior knockoffs, or carry out abusive tasks such as surreptitiously monetizing web searches, something Google expressly forbids.
Not looking? Don’t care? Both?
A search Wednesday morning in California for Norton Password Manager, for example, returned not only the official extension but three others, all of which are unrelated at best and potentially abusive at worst. The results may look different for searches at other times or from different locations.
Search results for Norton Password Manager.
It’s unclear why someone who uses a password manager would be interested in spoofing their time zone or boosting the audio volume. Yes, they’re all extensions for tweaking or otherwise extending the Chrome browsing experience, but isn’t every extension? The Chrome Web Store doesn’t want extension users to get pigeonholed or to see the list of offerings as limited, so it doesn’t just return the title searched for. Instead, it draws inferences from descriptions of other extensions in an attempt to promote ones that may also be of interest.
In many cases, developers are exploiting Google’s eagerness to promote potentially related extensions, running campaigns that foist irrelevant or abusive offerings on users. But wait: Chrome security people have put developers on notice that they’re not permitted to engage in keyword spam and other search-manipulating techniques. So how is this happening?
One way is by abusing a language translation feature built into the extension description system. For reasons that aren’t clear, Google allows descriptions to be translated into more than 50 different languages. Rather than blanketing a description with a wall of keywords in the language of the users they want to target, developers stash the keywords in the description for another language. Developers trying to reach Europeans often “sacrifice” Asian languages such as Bengali, Palant said. Developers targeting Asians, by contrast, tend to choose European languages like Estonian.
Even when a description is tailored to a specific language, its keywords get swept into the search index shared across all languages. This allows developers to plaster tens of thousands of misleading keywords into descriptions without appearing to run afoul of Google’s policies.
As Palant explained in his writeup: “Apparently, some extension authors figured out that the Chrome Web Store search index is shared across all languages. If you wanted to show up in the search when people look for your competitors for example, you could add their names to your extension’s description—but that might come across as spammy. So what you do instead is sacrificing some of the ‘less popular’ languages and stuff the descriptions there full of relevant keywords. And then your extension starts showing up for these keywords even when they are entered in the English version of the Chrome Web Store. After all, who cares about Swahili other than maybe five million native speakers?”
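To make the mechanics concrete, here’s a minimal sketch, in Python, of how a search index that merges every locale’s description would behave. The extension ID, locale codes, and keyword strings are all hypothetical, and the Chrome Web Store’s real indexing pipeline isn’t public; this only illustrates the shared-index behavior Palant describes.

```python
from collections import defaultdict

# Hypothetical per-locale descriptions for a single extension. The English
# text is clean; the "sacrificed" Swahili entry is stuffed with competitors'
# names and generic coupon keywords.
descriptions = {
    "coupon-extension": {
        "en": "Find discounts automatically while you shop.",
        "sw": "RetailMeNot Slickdeals Honey Coupert coupon promo code",
    },
}

# A naive index that merges the text of every locale, mirroring the
# shared-across-languages behavior Palant observed.
index = defaultdict(set)
for ext_id, locales in descriptions.items():
    for text in locales.values():
        for token in text.lower().split():
            index[token].add(ext_id)

# An English-language search now surfaces the extension via the Swahili spam.
print(index["slickdeals"])  # {'coupon-extension'}
```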
An example of this technique in action can be found in an extension named Charm - Coupons, Promo Codes, & Discounts. When viewed in languages including English, the description is concise and gives the impression of a legitimate, privacy-focused extension for finding discounts.
Viewing the entire descriptions file the developers provided to Google tells a very different story. Descriptions specified for languages such as Armenian, Bengali, and Filipino list the extension's name as "RetailMeNot Retail Me Not Fakespot Fake spot Slickdeals," "promo code The Camelizer wanteeed Cently Acorns Earn," and "Coupert Karma CouponBirds Coupon Birds Octoshop discount." The name in Telugu even invokes PayPal and CNET, both of which develop competing extensions.
Description showing extension names.
More misleading still are keywords loaded into language-specific long descriptions. There are more than 18,000 of them. The keywords aren’t displayed when viewing the description in most languages, but they nonetheless affect the results of extension searches in the Chrome Web Store.
A small sampling of more than 18,000 keywords for the extension
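Spotting descriptions like these doesn’t take sophisticated tooling. Here is a crude heuristic of our own devising, not Palant’s published method, that flags a locale whose description is wildly longer than the English baseline; the function name, sample data, and threshold are invented for illustration.

```python
def looks_stuffed(base_description: str, locale_description: str,
                  max_ratio: float = 5.0) -> bool:
    """Flag a locale description with vastly more tokens than the base
    (English) description -- a crude signal of keyword stuffing."""
    base_tokens = base_description.split()
    locale_tokens = locale_description.split()
    if not base_tokens:
        return False
    return len(locale_tokens) / len(base_tokens) > max_ratio

english = "Find discounts automatically while you shop."
# Simulate a sacrificed locale stuffed with ~18,000 keywords.
stuffed = " ".join(["coupon", "promo", "discount"] * 6000)
print(looks_stuffed(english, stuffed))  # True
```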
Palant identified 920 Chrome extensions that use the technique. He traced them back to a handful of “clusters,” groups of extensions that appear to come from related developers.
Palant said most of the extensions also used other approaches to manipulate Chrome Web Store placement, including invoking competitors’ names, listing the same extension under different names, and stuffing keywords within or at the end of descriptions.
In an interview, Palant said he has alerted Google to these sorts of coordinated manipulations in the Chrome Web Store in the past. And yet they persist, and they are easy for anyone with an interest to spot.
“Google isn’t monitoring spam,” he wrote. “It wasn’t that hard to notice, and they have better access to the data than me. So either Google isn’t looking or they don’t care.” Google didn’t respond to an email asking whether it’s aware of the spam or plans to stop it.
In early December I got the kind of tip we’ve been getting a lot over the past year. A reader had noticed a post from someone on Reddit complaining about a very graphic sexual ad appearing in their Instagram Reels. I’ve seen a lot of ads for scams or shady dating sites recently, and some of them were pretty suggestive, to put it mildly, but the ad the person on Reddit complained about was straight-up a close-up image of a vagina.
The reader who tipped 404 Media did exactly what I would have done, which is look up the advertiser in Facebook’s Ad Library, and found that the same advertiser ran around 800 ads across all of Meta’s platforms in November, the vast majority of which were just different close-up images of vaginas. When clicked, the ads took users to a variety of sites for “confidential dating” or “hot dates” in your area. Facebook started to remove some of these ads on December 13, but at the time of writing, most of them remained undetected by its moderators, according to the Ad Library.
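For anyone who wants to replicate that kind of lookup programmatically, here’s a rough sketch of querying Meta’s Ad Library API with Python’s requests library. The endpoint, parameters, and field names follow Meta’s public documentation as we understand it, but treat the version string and fields as assumptions that may have changed; the API also requires identity verification and an approved access token, and the page ID and token below are placeholders.

```python
import requests

ACCESS_TOKEN = "YOUR_APPROVED_TOKEN"   # placeholder; requires Meta approval
ADVERTISER_PAGE_ID = "123456789"       # placeholder advertiser page ID

# Query the Ad Library API for ads run by a specific page, as seen in the US.
resp = requests.get(
    "https://graph.facebook.com/v18.0/ads_archive",
    params={
        "search_page_ids": ADVERTISER_PAGE_ID,
        "ad_reached_countries": '["US"]',
        "fields": "page_name,ad_delivery_start_time,ad_snapshot_url",
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
resp.raise_for_status()

# Print when each ad started running and a link to its snapshot.
for ad in resp.json().get("data", []):
    print(ad.get("ad_delivery_start_time"), ad.get("ad_snapshot_url"))
```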
Like I said, we get a lot of tips like this these days. We get so many, in fact, that we don’t write stories about them unless there’s something novel about them or something our readers need to know. Facebook taking money to put explicit porn in its ads, despite that being a clear violation of its own policies, is not new, but it is a new low for the company and a clear indicator of Facebook’s “fuck it” approach to content moderation, and to moderation of its ads specifically.
AI Forensics, a tech platform and algorithmic auditing firm, today put out a report that quantifies just how widespread this problem is. It found more than 3,000 pornographic ads promoting “dubious sexual enhancement products,” which generated over 8 million impressions in the European Union alone over the course of a year.
In an attempt to show that the ads didn’t use some clever technique to bypass Meta’s moderation tools, AI Forensics uploaded the exact same visuals as standard, non-promoted posts on Instagram and Facebook, and they were removed promptly for violating Meta’s Community Standards.
“Our findings suggest that although Meta has the technology to automatically detect pornographic content, it does not apply it to enforce its community standards on advertisements as it does for non-sponsored content,” AI Forensics said in its report. “This double standard is not a temporary bug, but persisted since as early as, at least, December 2023.”
When we write about this problem with Facebook’s moderation, we always stress that there’s nothing inherently alarming about nudity itself on social media. The problem is that the policy against it is blatantly hypocritical: it often bans legitimate adult content creators, sex workers, and sex educators who are trying to play by the platform’s rules, while bad actors who don’t care about Facebook’s rules find loopholes that allow them to post all the pornography they want. Additionally, that pornography is almost always stolen from the same legitimate creators Facebook polices so heavily, the ads are almost always for products and services that are trying to scam or take advantage of the audience Facebook is allegedly trying to protect, and in some cases the ads promote tools for creating nonconsensual pornography.
Adding insult to injury, on top of the hypocrisy I lay out above, Facebook is now punishing us for publishing stories about this very problem.
In October, I published a story with the headline When Does Instagram Decide a Nipple Becomes Female, in which artist Ada Ada Ada tests the boundaries of Instagram’s automated and human moderation systems by uploading a daily image of her naked torso during her transition. The project exposes how silly Instagram’s rules are around allowing images of male nipples while not allowing images of female nipples, and how those rules are arbitrarily enforced.
It was disappointing but not at all surprising that Facebook punished us for sharing that story on its platform. “We removed your photo,” an automated notification from Facebook to the official 404 Media account read. “This goes against our Community Standards on nudity or sexual activity.”
Separately, when Jason tried to share it on Threads, the platform removed his post because it included “nudity or sexual activity.” Weirdly, none of the images in the post Jason shared were flagged when Ada Ada Ada uploaded them to Instagram, but they were when Jason shared them on Threads. Threads also removed Joe’s post about a story I wrote about people making AI-generated porn of the Vatican’s new mascot, a story that is about adult content but that doesn’t contain nude images.
Both our official 404 Media page and Jason’s personal account, which he has had for 20 years and which is the “admin” of the 404 Media page, were dinged several times for sharing stories about a bill that would rewrite obscenity standards, the FBI charging a man with cyberstalking, and AI-generated fake images about a natural disaster on Twitter. Facebook has threatened the existence of not just the official 404 Media page but also of Jason’s personal account.
Not a single one of these stories or the images they include violates Facebook’s policies as they are written, but Facebook has nonetheless limited how many people see these stories and our page in general because we shared them. Facebook has also prevented us from inviting people to like the page (which presumably limits its reach further) and warned us that the page was “at risk of being suspended,” and later, “unpublished.”
Facebook gave us the chance to appeal all of these decisions, but as many sex workers and educators have told us over the years, trying to correct Facebook’s moderation is not simple: the “appeals” process consists solely of clicking a few predetermined boxes, with no chance to interact with a moderator or plead your case. We appealed three of the decisions in late October; none were accepted.
The appeal we filed in mid-December on Ada Ada Ada’s story on the official 404 Media page was accepted within a few hours, and the restrictions on the 404 Media page (and Jason’s personal account) were lifted. But an appeal Jason filed on his Threads post about the same story was not accepted: “We reviewed your post again. We confirmed that it does not follow our Community Guidelines on nudity or sexual activity,” the determination read. The different outcomes for what was essentially the exact same post show how all-over-the-place Meta’s moderation remains, which creates an infuriating dynamic for adult content creators. Mark Zuckerberg has personally expressed regret for giving in to pressure from the Biden administration to “censor” content during the height of the coronavirus pandemic, but neither he nor Meta has extended an apology to adult content creators who are censored regularly.
It was hard enough to deal with having to constantly prove to Facebook that our journalism is not pornography or harmful content when we worked at VICE, where we had a whole audience and social media team who dealt with this kind of thing. It’s much harder for us to do that now that we’re an independent publication with only four workers who have to do this in addition to everything else. I can’t imagine how demoralizing it would be to have to deal with this as a single adult content creator trying to promote their work on Facebook’s platforms.
Again, this is frustrating on its own, but it becomes infuriating when I regularly see Facebook not only take money from advertisers that push nudity on Facebook, but do so when those ads exist for the explicit purpose of creating nonconsensual content or scamming its users.
The silver lining here is that Facebook was already increasingly a waste of our time. The only reason we’re able to share our stories via our official Facebook page is that we’ve fully automated that process, because it is not actually worth our time to post our stories there organically. Since before we started 404 Media, we knew there was very little chance that Facebook would help us reach people, grow our audience, and make the case that people should support our journalism, so in a way we lost nothing because there’s nothing to lose.
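For the curious, automation like ours can be as simple as the following sketch, which pushes a story to a Facebook page through the Graph API’s /{page-id}/feed edge. The page ID, token, headline, and URL are placeholders, and we’re assuming the currently documented requirement of a page access token with the pages_manage_posts permission.

```python
import requests

PAGE_ID = "1234567890"                 # placeholder page ID
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder page access token

def post_story(headline: str, url: str) -> None:
    """Publish a link post to the page via the Graph API feed edge."""
    resp = requests.post(
        f"https://graph.facebook.com/v18.0/{PAGE_ID}/feed",
        data={"message": headline, "link": url, "access_token": PAGE_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()

# Example usage with a placeholder story URL.
post_story("Our latest story headline", "https://example.com/story")
```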
On the other hand, that perspective is based on us having already accepted Facebook’s rejection of our journalism years ago. It’s not as if people don’t get any news on Facebook. According to Pew, about a third of adults in the U.S. get news from Facebook, but according to media monitoring tool Newswhip, the top 10 publishers on Facebook are British tabloids, People, Fox News, CNN, and BBC. Smaller publishers, especially publishers who are trying to report on some of the biggest problems that are plaguing Facebook, are punished for pointing out that those problems involve adult content, which disincentivizes that reporting and allows those problems to fester.
I don’t like it, but ultimately the choices Facebook is making here are shaping its platform, and it’s going to be a bigger problem for its users who are going to fall victim to these ads than it is for us as a publisher.
It's very telling that Facebook quickly detects and removes porn when users post it, but seemingly has a much more difficult time detecting the exact same porn when it's posted as part of an advertisement.