Microsoft will not label fake news on its platforms. In an interview with Bloomberg, president Brad Smith stated that, instead of labeling, the company will focus on “transparency” to avoid the appearance of censorship. The stance is the opposite of that taken by other big tech companies such as Meta, Twitter, and Google.
The statements give direction to the strategy of the company, which owns LinkedIn and Bing, for dealing with disinformation. Smith said Microsoft wants to give the public more information about who is speaking and what is being said, so that viewers themselves can judge whether that content is true.
“Our whole approach needs to be to provide people with more information, not less, and we cannot stumble and use what others might consider censorship as a tactic,” he said. Here, Smith was addressing accusations of censorship leveled at technology companies when they label or take down content containing disinformation.
The company’s president also argued that it is not the role of big tech companies to define what disinformation is. “I don’t think people want governments to tell them what’s true and what’s false,” Smith said. “And I don’t think they’re really interested in what tech companies tell them.”
Microsoft focuses on “transparency” to contain fake news
Labeling is a strategy some companies have adopted to contain fake news, as is the case with Meta. Microsoft, however, has decided to take a different path to combat misinformation. Without revealing many details, Smith stressed that the company will pursue one main goal: “transparency”.
As Bloomberg noted, the company focuses on investigating disinformation campaigns. This effort is supported by the company’s cybersecurity team, which helps gather data on incidents. The information is then shared with governments so that countermeasures can be taken.
The approach may be interesting, but it might not work in a timely manner. In 2018, an MIT study revealed that, on Twitter, fake news reached people faster than verified information. The research also pointed out that humans, not bots, played the key role in the proliferation of misinformation.
Meanwhile, other tech companies have taken a different path in recent years. Meta, for example, banned 1.3 billion fake Facebook accounts between October and December 2020. At the time, the action contained more than 100 covert foreign and domestic influence operations.
Twitter is another company that has taken a more assertive approach to the matter. Beyond labels, at the beginning of 2022 the social network released a tool to report fake news in Brazil, allowing users to flag a post containing disinformation as “misleading content”.
YouTube was even threatened with being blocked in Russia for removing fake news about COVID-19. The case happened in 2021, after the platform took down two German channels managed by a Russian state-owned company. The Google service also removed 14 livestreams and a video of President Jair Bolsonaro advocating the use of chloroquine and ivermectin in the treatment of COVID-19.
With information: Bloomberg