Facebook staff read encrypted WhatsApp messages, share info with prosecutors – reports






When Facebook acquired WhatsApp, it promised to respect the privacy of its users. That hasn’t been the case: the firm now employs more than a thousand contract moderators to read supposedly encrypted chats.

Social media behemoth Facebook acquired WhatsApp in 2014, with CEO Mark Zuckerberg promising to keep the stripped-down, ad-free messaging app “exactly the same.” End-to-end encryption was introduced in 2016, with the app itself offering on-screen assurances to users that “No one outside of this chat” can read their communications, and Zuckerberg himself telling the US Senate in 2018 that “We don’t see any of the content in WhatsApp.”

Allegedly, none of that is true. More than a thousand content moderators are employed at shared Facebook/WhatsApp offices in Austin, Texas; Dublin, Ireland; and Singapore to sift through messages reported by users and flagged by artificial intelligence.

Based on internal documents, interviews with moderators, and a whistleblower complaint, ProPublica explained how the system works in a lengthy investigation published on Wednesday.

When a user presses ‘report’ on a message, the message itself plus the preceding four messages in the chat are unscrambled and sent to one of these moderators for review. Moderators also examine messages picked out by artificial intelligence, based on unencrypted data collected by WhatsApp. The data collected by the app is extensive, and includes:

“The names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.”
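To make that flow concrete, here is a minimal Python sketch of the payload a report might produce, built only from the fields named above. Every name in it (UnencryptedMetadata, ReportPayload, build_report) is hypothetical rather than taken from WhatsApp’s actual code; the key assumption is that end-to-end encryption protects messages only in transit, so the reporting user’s device already holds plaintext copies it can upload.

```python
from dataclasses import dataclass

# Hypothetical model of the reporting flow described above -- all names
# are invented for illustration, not WhatsApp's real code. "Unscrambling"
# here is simply the reporter's client uploading its local plaintext,
# since end-to-end encryption only protects messages in transit.

@dataclass
class UnencryptedMetadata:
    """Fields the app reportedly collects outside the encrypted channel."""
    phone_number: str
    group_names: list          # names/profile images of the user's groups
    status_message: str
    battery_level: int         # percent
    language: str
    time_zone: str
    device_id: str             # unique mobile phone ID
    ip_address: str
    signal_strength: int
    operating_system: str
    devices: list              # the user's electronic devices
    linked_accounts: list      # related Facebook/Instagram accounts
    last_active: str           # last time the app was used
    prior_violations: int

@dataclass
class ReportPayload:
    """What a moderator receives: the flagged message plus its context."""
    reported_message: str
    context: list              # up to four preceding messages, in the clear
    sender_metadata: UnencryptedMetadata

def build_report(chat_history: list, index: int,
                 metadata: UnencryptedMetadata) -> ReportPayload:
    """Bundle the reported message with the four messages before it."""
    context = chat_history[max(0, index - 4):index]
    return ReportPayload(chat_history[index], context, metadata)
```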

These moderators are not employees of WhatsApp or Facebook. Instead, they are contractors hired through the consulting firm Accenture and paid $16.50 per hour. They are bound to silence by nondisclosure agreements, and their hiring went unannounced by Facebook.

Likewise, the actions of these moderators go unreported. Facebook releases quarterly ‘transparency reports’ for its own platform and subsidiary Instagram, detailing how many accounts were banned or otherwise disciplined and for what, but does not do this for WhatsApp.

Many of the messages reviewed by moderators are flagged in error. WhatsApp has two billion users who speak hundreds of languages, and staff sometimes have to rely on Facebook’s translation tool to analyze flagged messages; one employee said the tool is “horrible” at decoding local slang and political content.

Aside from false reports submitted as pranks, moderators have to analyze perfectly innocent content highlighted by AI. Companies using the app to sell straight-edge razors have been flagged as selling weapons. Parents photographing their bathing children have been flagged for child porn, and lingerie companies have been flagged as forbidden “sexually oriented business[es].”

“A lot of the time, the artificial intelligence is not that intelligent,” one moderator told ProPublica.

WhatsApp acknowledged that it analyzes messages to weed out “the worst” abusers, but doesn’t call this “content moderation.”

“We actually don’t typically use the term for WhatsApp,” Director of Communications Carl Woog told ProPublica. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”

Facebook has lied about its commitment to user privacy before. Two years after Zuckerberg assured users that Facebook would keep WhatsApp ad-free and let it “operate completely autonomously,” he revealed plans to link WhatsApp accounts to Facebook for the purposes of ad targeting. The move earned Facebook a $122 million fine from EU antitrust regulators, who said the Facebook CEO had “intentionally or negligently” deceived them.

Despite Zuckerberg’s assurances of privacy, WhatsApp shares more user metadata (data that can identify a user without revealing the content of their messages) with law enforcement than rival messaging services from Apple and Signal do. This metadata, which can reveal phone numbers, locations, timestamps, and more, is valuable to law enforcement and intelligence agencies: NSA whistleblower Edward Snowden’s 2013 leaks revealed a large-scale operation by the agency to capture the metadata of millions of Americans’ communications.

“Metadata absolutely tells you everything about somebody’s life,” former NSA General Counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.”
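To see why, consider the sketch below, which uses invented sample records rather than real data or any agency’s tooling: with nothing but sender, recipient, and timestamp, a weighted contact graph and an activity profile fall out in a few lines of Python.

```python
from collections import Counter
from datetime import datetime

# Illustration of Baker's point with made-up records. Each record is pure
# metadata -- sender, recipient, timestamp -- with no message content.
records = [
    ("+15550001", "+15550002", "2021-09-01T23:14:00"),
    ("+15550001", "+15550002", "2021-09-02T23:41:00"),
    ("+15550001", "+15550003", "2021-09-02T09:05:00"),
]

# Who talks to whom, and how often: a weighted contact graph.
contact_graph = Counter((src, dst) for src, dst, _ in records)

# When each number is active: routines, time zones, late-night contacts.
active_hours = Counter(
    (src, datetime.fromisoformat(ts).hour) for src, _, ts in records
)

print(contact_graph.most_common())  # [(('+15550001', '+15550002'), 2), ...]
print(active_hours)                 # repeated 23:00 traffic stands out
```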

Across all of its platforms, Facebook complies with 95% of requests for metadata. While it is unknown exactly what law enforcement has gleaned from WhatsApp metadata, the US Department of Justice has requested it more than a dozen times since 2017, and likely far more often, given that many of these requests are not made public. WhatsApp metadata was used to jail Natalie Edwards, a former Treasury Department official who leaked confidential banking reports about suspicious transactions to BuzzFeed News.

Inside WhatsApp, the company stresses the importance of promoting itself as a privacy-focused operation. A marketing document obtained by ProPublica states that WhatsApp should portray itself as “courageous,” taking a “strong, public stance that is not financially motivated on things we care about,” such as defending encryption and user privacy.

However, another line in the same document states that “future business objectives” mean that “while privacy will remain important, we must accommodate for future innovations.”

Source: https://www.rt.com/news/534257-facebook-whatsapp-moderators-read-messages/



