Google is making it easier to find search results from Reddit and other forums

Google is making it easier to find search results from Reddit and other forum sites. The search engine is adding a new module that will surface discussions happening on forums across the web for queries that may benefit from crowd-sourced answers.

The “discussions and forums” module will surface relevant posts from sites like Reddit and Quora alongside more traditional search results. It’s not clear exactly how Google is determining what types of searches are best suited to forum posts. The company says the new “forum” results will “appear when you search for something that might benefit from the diverse personal experiences found in online discussions.”

The feature is already rolling out for mobile searches in the United States. Google didn't specify when it may be available more widely, but said it will consider updates in the future.

Google is also adding a new feature to news-related searches that will make it easier to browse international headlines that are published in languages other than English. With the change, news-related searches will also turn up relevant local coverage translated by Google.

The company uses the example of the recent earthquake in Mexico. With the update, search results will also show “news from Mexico,” which will highlight coverage from local outlets originally written in Spanish but translated into English. Of course, Google Chrome and other browsers can already translate web pages. But Google says that elevating stories from international outlets directly in search will help provide “new global perspectives” on important stories.

The feature, which is labeled as being in beta for now, is expected early next year. It’s starting off with the ability to translate headlines and stories from Spanish, French and German into English, though the beta designation suggests Google is likely to add more languages over time.

Elon Musk and Twitter are now fighting about Signal messages

Elon Musk’s private messages could once again land him in hot water in his legal fight with Twitter. Lawyers for the two sides faced off in Delaware’s Court of Chancery ahead of an October trial that will determine the fate of the deal.

Among the issues raised in the more than three-hour hearing was Musk’s use of the encrypted messaging app Signal. Twitter’s lawyers claim that Musk has been withholding messages sent via the app, citing a screenshot of an exchange between Musk and Jared Birchall, the head of Musk’s family office.

According to Twitter’s lawyers, the message referenced Morgan Stanley and Marc Andreessen as well as “a conversation about EU regulatory approval” of Musk’s deal with Twitter. Twitter’s lawyers said they uncovered the screenshot after Musk and Birchall had denied using Signal to talk about the deal. The screenshot showed the message was set to automatically delete.

Lawyers for Twitter also cited “a missing text message” between Musk and Oracle Chairman Larry Ellison, who was set to be a co-investor in the Twitter deal. Musk and Ellison were texting the morning before Musk tweeted that the Twitter deal was “temporarily on hold.” It’s not clear what the significance of the texts is, but Twitter’s lawyers noted that Musk wrote to Ellison saying “interesting times” before arranging a phone call with him.

Twitter’s lawyers are asking the judge in the case, Kathaleen St. J. McCormick, to sanction Musk over his side’s handling of his messages. “We do think that the time has come for the court to issue a severe sanction,” Twitter’s lawyers said during the hearing.

Musk’s side attempted to downplay the significance of the Tesla CEO’s use of Signal. “There actually is no evidence that we destroyed evidence,” one of Musk’s lawyers responded. “Signal, you know, it sounds like it’s a nefarious device,” she said. “In fact, Twitter executives have testified that a number of them actually use Signal messaging.”

Musk’s lawyers cited the existence of Signal messages between Jack Dorsey and board chair Bret Taylor, and noted that current CEO Parag Agrawal has also turned over Signal messages. “Signal is not some exotic mechanism, it’s very common in Silicon Valley to use this platform,” she said.

Notably, the latest hearing is not the first time Twitter’s lawyers have used Musk’s private messages obtained in the legal discovery process in their bid to enforce the original terms of the deal with Musk. Twitter’s lawyers previously called out a text message between Musk and one of his Morgan Stanley bankers in which he cited concerns about “World War 3” as a reason to slow-roll his negotiations with Twitter.

McCormick is expected to rule on Twitter’s motion to sanction Musk in the next couple of days. The five-day trial is scheduled to begin on October 17th.

Meta dismantles a China-based network of fake accounts ahead of the midterms

Meta has taken down a network of fake accounts from China that targeted the United States with memes and posts about “hot button” political issues ahead of the midterm elections. The company said the fake accounts were discovered before they amassed a large following or attracted meaningful engagement, but that the operation was significant because of its timing and the topics the accounts posted about.

The network consisted of 81 Facebook accounts, eight Facebook Pages, two Instagram accounts and a single Facebook Group. Just 20 accounts followed at least one of the Pages, and the Group had about 250 members, according to Meta.

The fake accounts posted in four different “clusters” of activity, Meta said, beginning with Chinese-language content “about geopolitical issues, criticizing the US.” The next cluster graduated to memes and posts in English, while subsequent clusters created Facebook Pages and hashtags that also circulated on Twitter. In addition to the US, some clusters also targeted posts to people in the Czech Republic.

During a call with reporters, Meta’s Global Threat Intelligence Lead Ben Nimmo said the people behind the accounts “made a number of mistakes” that allowed Meta to catch them more easily, such as only posting during working hours in China. At the same time, Nimmo said the network represented a “new direction for Chinese influence operations” because the accounts posed as both liberals and conservatives, advocating for both sides on issues like gun control and abortion rights.

“It’s like they were using these hot button issues to try and find an entry point into American discourse,” Nimmo said. “It is an important new direction to be aware of.” The accounts also shared memes about President Joe Biden, Florida Senator Marco Rubio, Utah Senator Mitt Romney and House Speaker Nancy Pelosi, according to Meta.

Meta also shared details about a much larger network of fake accounts from Russia, which it described as the “most complex Russian-origin operation that we’ve disrupted since the beginning of the war in Ukraine.” The company identified more than 1,600 Facebook accounts and 700 Facebook Pages associated with the effort, which drew more than 5,000 followers.

The network used the accounts to boost a series of fake websites that impersonated legitimate news outlets and European organizations. The accounts targeted people in Germany, France, Italy, Ukraine and the United Kingdom, and posted in several languages.

“They would post original articles that criticized Ukraine and Ukrainian refugees, praised Russia and argued that Western sanctions on Russia would backfire,” Meta writes in its report. “They would then promote these articles and also original memes and YouTube videos across many internet services, including Facebook, Instagram, Telegram, Twitter, petitions websites Change[.]org and Avaaz[.]com, and even LiveJournal.”

Meta notes that “on a few occasions” the posts from these fake accounts were “amplified by Russian embassies in Europe and Asia,” though it didn’t find direct links between the embassy accounts and the network. For both the Russia- and China-based networks, Meta said it was unable to attribute the fake accounts to specific individuals or groups within those countries.

The takedowns come as Meta and its peers are ramping up security and anti-misinformation efforts to prepare for the midterm elections in the fall. For Meta, that means largely using the same strategy it employed in the 2020 presidential election: a combination of highlighting authoritative information and resources while relying on labels and third-party fact-checkers to tamp down false and unverified info.

Facebook violated Palestinians’ right to free expression, says report commissioned by Meta

Meta has finally released the findings of an outside report that examined how its content moderation policies affected Israelis and Palestinians amid an escalation of violence in the Gaza Strip last May. The report, from Business for Social Responsibility (BSR), found that Facebook and Instagram violated Palestinians’ right to free expression.

“Based on the data reviewed, examination of individual cases and related materials, and external stakeholder engagement, Meta’s actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” BSR writes in its report.

The report also notes that “an examination of individual cases” showed that some Israeli accounts were also erroneously banned or restricted during this period. But the report’s authors highlight several systemic issues they say disproportionately affected Palestinians.

According to the report, “Arabic content had greater over-enforcement,” and “proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.” The report also notes that Meta had an internal tool for detecting “hostile speech” in Arabic, but not in Hebrew, and that Meta’s systems and moderators had lower accuracy when assessing Palestinian Arabic.

As a result, many users’ accounts were hit with “false strikes,” and wrongly had posts removed by Facebook and Instagram. “These strikes remain in place for those users that did not appeal erroneous content removals,” the report notes.

Meta commissioned the report following a recommendation from the Oversight Board last fall. In a response to the report, Meta says it will update some of its policies, including several aspects of its Dangerous Organizations and Individuals (DOI) policy. The company says it’s “started a policy development process to review our definitions of praise, support and representation in our DOI Policy,” and that it’s “working on ways to make user experiences of our DOI strikes simpler and more transparent.”

Meta also notes it has “begun experimentation on building a dialect-specific Arabic classifier” for written content, and that it has changed its internal process for managing keywords and “block lists” that affect content removals.

Notably, Meta says it’s “assessing the feasibility” of a recommendation that it notify users when it places “feature limiting and search limiting” on users’ accounts after they receive a strike. Instagram users have long complained that the app shadowbans or reduces the visibility of their account when they post about certain topics. These complaints increased last spring when users reported that they were barred from posting about Palestine, or that the reach of their posts was diminished. At the time, Meta blamed an unspecified “glitch.” BSR’s report notes that the company had also implemented emergency “break glass” measures that temporarily throttled all “repeatedly reshared content.”

Twitter is logging out some users following password reset ‘incident’

Twitter has disclosed an “incident” affecting the accounts of an unspecified number of users who opted to reset their passwords. According to the company, a “bug” introduced sometime in the last year prevented Twitter users from being logged out of their accounts on all of their devices after initiating a password reset.

“If you proactively changed your password on one device, but still had an open session on another device, that session may not have been closed,” Twitter explains in a brief blog post. “Web sessions were not affected and were closed appropriately.”

Twitter says it is “proactively” logging some users out as a result of the bug. The company attributed the issue to “a change to the systems that power password resets” that occurred at some point in 2021. A Twitter spokesperson declined to elaborate on when this change was made or exactly how many users are affected. “I can share that for most people, this wouldn’t have led to any harm or account compromise,” the spokesperson said. 

While Twitter states that “most people” wouldn’t have had their accounts compromised as a result, the news could be worrying for those who have used shared devices, or dealt with a lost or stolen device in the last year.

Notably, Twitter’s disclosure of the incident comes as the company is reeling from allegations from its former head of security, who filed a whistleblower complaint accusing the company of “grossly negligent” security practices. Twitter has so far declined to address the claims in detail, citing its ongoing litigation with Elon Musk, who is using the whistleblower allegations in his legal bid to get out of his $44 billion deal to buy Twitter.

YouTube’s ‘dislike’ barely works, according to new study on recommendations

If you’ve ever felt like it’s difficult to “un-train” YouTube’s algorithm from suggesting a certain type of video once it slips into your recommendations, you’re not alone. In fact, it may be even more difficult than you think to get YouTube to accurately understand your preferences. One major issue, according to new research conducted by Mozilla, is that YouTube’s in-app controls, such as the “dislike” button, are largely ineffective as a tool for controlling suggested content. According to the report, these buttons “prevent less than half of unwanted algorithmic recommendations.”

Researchers at Mozilla used data gathered from RegretsReporter, its browser extension that allows people to “donate” their recommendations data for use in studies like this one. In all, the report relied on millions of recommended videos, as well as anecdotal reports from thousands of people.

Mozilla tested the effectiveness of four different controls: the thumbs down “dislike” button, “not interested,” “don’t recommend channel” and “remove from watch history.” The researchers found that these had varying degrees of effectiveness, but that the overall impact was “small and inadequate.”

Of the four controls, the most effective was “don’t recommend channel,” which prevented 43 percent of unwanted recommendations, while “not interested” was the least effective, preventing only about 11 percent of unwanted suggestions. The “dislike” button was nearly the same at 12 percent, and “remove from watch history” weeded out about 29 percent.

In their report, Mozilla’s researchers noted the great lengths study participants said they would sometimes go to in order to prevent unwanted recommendations, such as watching videos while logged out or while connected to a VPN. The researchers say the study highlights the need for YouTube to better explain its controls to users, and to give people more proactive ways of defining what they want to see.

“The way that YouTube and a lot of platforms operate is they rely a lot on passive data collection in order to infer what your preferences are,” says Becca Ricks, a senior researcher at Mozilla who co-authored the report. “But it’s a little bit of a paternalistic way to operate where you’re kind of making choices on behalf of people. You could be asking people what they want to be doing on the platform versus just watching what they’re doing.”

Mozilla’s research comes amid increased calls for major platforms to make their algorithms more transparent. In the United States, lawmakers have proposed bills to scale back “opaque” recommendation algorithms and to hold companies accountable for algorithmic bias. The European Union is even further ahead: the recently passed Digital Services Act will require platforms to explain how recommendation algorithms work and to open them to outside researchers.