Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.
Last year a major privacy scandal hit Facebook after it emerged that CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87M Facebook users without proper consent.
Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company working initially for the Ted Cruz and later the Donald Trump presidential campaigns.
But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).
The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.
Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive in the documents that would require sealing.
In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.
According to the District’s account a Washington D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.
Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).
Zuckerberg responded with a “yes” to Doyle’s question.
Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.
The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.
The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.
The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.
Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.
The question now is: if Facebook knew there were concerns about CA data-scraping before it hired the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?
The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.
Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.
But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.
Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:
Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath
In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.
Facebook did not engage with questions about any of the details and allegations in the court filing.
A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”
It goes on to suggest that Facebook’s concern to seal the document is “reputational”, suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.
“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.
As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.
It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal. Or whether there were multiple email threads raising concerns about the company.
The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)
In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:
[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.
We reached out to the ICO for comment on the information to emerge via the District of Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.
Last year the ICO issued Facebook with the maximum possible fine under UK law for the CA data breach.
Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.
A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.
Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.
Earlier this week the company said the video of the slayings had been viewed less than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.
None of the users who watched the killings unfold on the platform in real time reported the stream, according to the company.
It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2M of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though as we pointed out in our earlier report those stats are cherrypicked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.
In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.
The prime minister of New Zealand, Jacinda Ardern told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”
She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.
She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”
We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.
Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.
Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.
It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”
So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.
A further detail it chooses to dwell on in the update is how the AIs it uses to aid the human review of flagged Facebook Live streams are in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content heaps, above all the other stuff they also need to look at.
Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving semi-automatic weapons slipped right under the robot’s radar.
Facebook explains this by saying it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.
It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of first-person shooter videogame footage on online content platforms.
It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”
The videogame element is a chilling detail to consider.
It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)
Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.
“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.
This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.
Moderating Facebook remains a hopeless task because so few humans are doing it.
Moreover, AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — such as TV stations do — wouldn’t help catch inappropriate real-time content.)
At the same time Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too: aka Facebook users taking the time and trouble to report harmful content.
Some might say that’s an excellent argument for a social media tax.
The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.
Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.
But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for acceleration review in the hours after the stream ended if it had been reported as suicide content.
So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.
“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.
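The acceleration logic Facebook describes amounts to a priority queue over user reports: live streams first, recently-live streams reported as suicide next, everything else in the normal queue. As a hypothetical sketch only (the categories, weights and names here are illustrative assumptions, not Facebook’s actual rules), that triage might look like:

```python
# Hypothetical sketch of report-triage logic as described in Facebook's update.
# All category names and priority weights are assumptions for illustration.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Report:
    priority: int                         # lower number = reviewed sooner
    video_id: str = field(compare=False)  # not used for ordering


def triage_priority(is_live: bool, recently_live: bool, category: str) -> int:
    if is_live:
        return 0  # accelerated: responders might still help on the ground
    if recently_live and category == "suicide":
        return 1  # the expanded acceleration logic Facebook describes
    return 2      # standard review queue


queue = []
heapq.heappush(queue, Report(triage_priority(False, True, "violence"), "christchurch"))
heapq.heappush(queue, Report(triage_priority(True, False, "spam"), "live_stream"))
heapq.heappush(queue, Report(triage_priority(False, True, "suicide"), "recent_suicide"))

# Pop reports in review order: a recently-live video reported as
# "violence" rather than "suicide" lands in the slow lane.
order = [heapq.heappop(queue).video_id for _ in range(len(queue))]
```

Under this logic a report filed after the stream ended, and categorized as anything other than suicide, never reaches the accelerated lane — which is the gap Facebook says it is now re-examining.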
Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.
It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.
So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope with.
“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.
In all Facebook says it found and blocked more than 800 visually-distinct variants of the video that were circulating on its platform.
It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
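Matching tech of the kind Facebook refers to generally relies on perceptual fingerprints: hashes that stay stable under small edits but diverge once content is heavily re-cut. As a toy illustration of the idea (an “average hash” over a single 8x8 grayscale frame, nothing like Facebook’s production system), consider:

```python
# Toy "average hash" (aHash), one of the simplest perceptual hashes.
# Real video-matching systems are far more sophisticated; this only
# illustrates why light edits survive matching while re-cuts evade it.

def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255) -> 64-bit fingerprint."""
    avg = sum(sum(row) for row in pixels) / 64
    bits = 0
    for row in pixels:
        for p in row:
            # Each bit records whether a pixel is above the frame's mean.
            bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

original   = [[i * 32] * 8 for i in range(8)]                 # horizontal bands
brightened = [[min(p + 10, 255) for p in row] for row in original]
recut      = [[j * 32 for j in range(8)] for _ in range(8)]   # different scene

# A brightness tweak barely moves the hash; a re-cut frame is far away.
assert hamming(average_hash(original), average_hash(brightened)) <= 5
assert hamming(average_hash(original), average_hash(recut)) >= 20
```

The same fingerprinting idea applied to a video’s soundtrack is presumably what let Facebook’s audio matching recover visually altered variants that still carried the original audio.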
In a section on next steps Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its priority.
But audio matching clearly won’t help if malicious re-sharers both re-edit the visuals and switch the soundtrack in future.
It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.
Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.
It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)
In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.
“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.
Flip the “days since last Facebook security incident” back to zero.
The discovery — hundreds of millions of user passwords stored in a readable format on internal servers — was made in January, said Facebook’s Pedro Canahuati, as part of a routine security review. None of the passwords were visible to anyone outside Facebook, he said. Facebook admitted the security lapse months later, after security journalist Brian Krebs reported that internal logs containing the passwords were accessible to some 2,000 engineers and developers.
Krebs said the bug dated back to 2012.
“This caught our attention because our login systems are designed to mask passwords using techniques that make them unreadable,” said Canahuati. He added that the company had “found no evidence to date that anyone internally abused or improperly accessed them,” but did not say how it reached that conclusion.
Facebook said it will notify “hundreds of millions of Facebook Lite users” (Facebook Lite being a lighter version of the app for regions where internet speeds are slow and bandwidth is expensive) and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.
Krebs said as many as 600 million users could be affected — about one-fifth of the company’s 2.7 billion users, but Facebook has yet to confirm the figure.
Facebook also didn’t say how the bug came to be. Storing passwords in readable plaintext is insecure; companies like Facebook instead hash and salt passwords (two techniques that scramble them irreversibly) so they can verify a user’s password without ever knowing what it is.
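To make the contrast with plaintext storage concrete, here is a minimal sketch of salted password hashing using Python’s standard library scrypt function. The cost parameters are illustrative defaults, not anything Facebook uses; production systems should follow current guidance on parameter choice.

```python
# Minimal sketch of salted password hashing (illustrative parameters only).
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt ensures identical passwords
    # produce different stored hashes.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```

The server only ever stores the salt and digest; the original password cannot be recovered from them, which is exactly the property a plaintext log forfeits.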
It’s the latest in a string of embarrassing security issues at the company, prompting congressional inquiries and government investigations. It was reported last week that Facebook’s deals that allowed other tech companies to access account data without consent was under criminal investigation.
It’s not known why Facebook took months to confirm the incident, or if the company informed state or international regulators per U.S. breach notification and European data protection laws. We asked Facebook but a spokesperson did not immediately comment beyond the blog post.
The Irish data protection office, which covers Facebook’s European operations, said the company “informed us of this issue” and the regulator is “currently seeking further information.”
Six months ago, Roman, the cloud pharmacy for erectile dysfunction, dropped the “man” to become “Ro.” At the same time, the company raised a monstrous Series A funding round of $88 million and unveiled Zero, a product meant to help people quit smoking, a root cause of ED.
Now, Ro is addressing a different demographic. Today, it announces Rory, a line of products for menopausal women. In total, Rory offers six products treating four conditions, with prices starting at $13 per month. Available for purchase and direct-to-consumer delivery are prescription medication and supplements for hot flashes; over-the-counter treatments for insomnia; prescription vaginal estrogen cream and an all-natural water-based lubricant for vaginal dryness; and Latisse, which helps grow eyelashes.
All of these conditions, including hair loss and insomnia, can be associated with menopause, or the process, typically at midlife, in which a woman stops menstruating.
To use Rory, which launches in 47 states today, women must complete an online doctor’s visit before they can be prescribed a personalized treatment plan. Rory is also launching a Facebook group and an online community, called Roar, for menopausal women to provide support to one another and to discuss topics from sex positions that help with vaginal dryness to how to sleep better at night.
“We aren’t used to talking about issues like vaginal dryness,” Rory co-founder Rachel Blank told TechCrunch. “Right now, we have [millions] of women experiencing menopause. They are walking around and frankly, their vagina hurts and they are uncomfortable. Really, what we are building at Rory is a lot of the educational content around this to let women know they have choices and they can take control during this phase of life where they feel like their bodies are rebelling against them.”
Unsurprisingly, the Ro founders are all male. In order to launch Rory, the trio — Rob Schutz, Saman Rahmanian and Zachariah Reitano — had to bring on talent knowledgeable of women’s health. Rachel Blank, a former investor at General Catalyst, an investor in Ro, seemed like a natural choice. Blank joined Ro full-time in the fall after learning about the company’s long-term vision to create personalized healthcare for everyone. General Catalyst, for its part, had been an investor in Ro since its August 2017 seed round.
“I was watching their pitch and having had that experience myself and listening to the founders of Ro talk about how much of a difference this platform could make in the lives of men with stigmatized conditions, it really resonated with me that this could really be a powerful tool for women as well,” Blank said.
Blank herself was diagnosed with polycystic ovary syndrome, a hormonal disorder that can cause the development of a number of cysts in the ovaries, at 21 years old. She is joined by Rory clinical director Melynda Barnes, a surgeon and otolaryngologist, and Ro co-founder and chief executive officer Zachariah Reitano, who oversees Ro’s growing portfolio of spin-out brands.
Ro has raised just over $90 million in venture capital funding to date, hitting a valuation of $154 million with its Series A, according to PitchBook. Its investors include Initialized Capital, Box Group and Slow Ventures, as well as angels like Y Combinator partner Aaron Harris, Benchmark’s Scott Belsky and the chief executives of Casper, Code Academy and Pill Pack.
Erectile pharmacy app Ro launches telehealth service for women (Wed, 20 Mar 2019)
Over the past several years, the law enforcement community has grown increasingly concerned about the conduct of digital investigations as technology providers enhance the security protections of their offerings—what some of my former colleagues refer to as “going dark.”
Data once readily accessible to law enforcement is now encrypted, protecting consumers’ data from hackers and criminals. However, these efforts have also had what Android’s security chief called the “unintended side effect” of making this data inaccessible to law enforcement. Consequently, many in the law enforcement community want the ability to compel providers to allow them to bypass these protections, often citing physical and national security concerns.
I know first-hand the challenges facing law enforcement, but these concerns must be addressed in a broader security context, one that takes into consideration the privacy and security needs of industry and our citizens in addition to those raised by law enforcement.
Perhaps the best example of the law enforcement community’s preferred solution is Australia’s recently passed Assistance and Access Bill, an overly-broad law that allows Australian authorities to compel service providers, such as Google and Facebook, to re-engineer their products and bypass encryption protections to allow law enforcement to access customer data.
While the bill includes limited restrictions on law enforcement requests, the vague definitions and concentrated authorities give the Australian government sweeping powers that ultimately undermine the security and privacy of the very citizens they aim to protect. Major tech companies, such as Apple and Facebook, agree and have been working to resist the Australian legislation and a similar bill in the UK.
Newly created encryption backdoors and work-arounds will become the target of criminals, hackers, and hostile nation states, offering new opportunities for data compromise and attack through the newly created tools and the flawed code that inevitably accompanies some of them. These vulnerabilities undermine providers’ efforts to secure their customers’ data, creating new and powerful vulnerabilities even as companies struggle to address existing ones.
And these vulnerabilities would not only impact private citizens, but governments as well, including services and devices used by the law enforcement and national security communities. This comes amidst government efforts to significantly increase corporate responsibility for the security of customer data through laws such as the EU’s General Data Protection Regulation. Who will consumers, or the government, blame when a government-mandated backdoor is used by hackers to compromise user data? Who will be responsible for the damage?
Companies have a fiduciary responsibility to protect their customers’ data, which not only includes personally identifiable information (PII), but their intellectual property, financial data, and national security secrets.
Worse, the vulnerabilities created under laws such as the Assistance and Access Bill would be subject almost exclusively to the decisions of law enforcement authorities, leaving companies unable to make their own decisions about the security of their products. How can we expect a company to protect customer data when their most fundamental security decisions are out of their hands?
Thus far law enforcement has chosen to downplay, if not ignore, these concerns — focusing singularly on getting the information they need. This is understandable — a law enforcement officer should use every power available to them to solve a case, just as I did when I served as a State Trooper and as an FBI Special Agent, including when I served as Executive Assistant Director (EAD) overseeing the San Bernardino terror attack case during my final months in 2015.
Decisions regarding these types of sweeping powers should not and cannot be left solely to law enforcement. It is up to the private sector, and our government, to weigh competing security and privacy interests. Our government cannot sacrifice the ability of companies and citizens to properly secure their data and systems’ security in the name of often vague physical and national security concerns, especially when there are other ways to remedy the concerns of law enforcement.
That said, these security responsibilities cut both ways. Recent data breaches demonstrate that many companies have a long way to go to adequately protect their customers’ data. Companies cannot reasonably cry foul over the negative security impacts of proposed law enforcement data access while continuing to neglect and undermine the security of their own users’ data.
Providers and the law enforcement community should be held to robust security standards that ensure the security of our citizens and their data—we need legal restrictions on how government accesses private data and on how private companies collect and use the same data.
There may not be an easy answer to the “going dark” issue, but it is time for all of us, in government and the private sector, to understand that enhanced data security through properly implemented encryption and data use policies is in everyone’s best interest.
The “extraordinary” access sought by law enforcement cannot exist in a vacuum — it will have far-reaching and significant impacts well beyond the narrow confines of a single investigation. It is time for a serious conversation between law enforcement and the private sector to recognize that their security interests are two sides of the same coin.
When the newly-minted chair of a congressional antitrust committee calls you out, it’s probably time to start worrying.
In an op-ed for the New York Times, Rhode Island Representative David N. Cicilline has called on the Federal Trade Commission to look into Facebook’s behavior for potential antitrust violations, citing TechCrunch’s own reporting that the company collected data on teens through a secret paid program, among many other scandals.
“After each misdeed becomes public, Facebook alternates between denial, hollow promises and apology campaigns,” Cicilline wrote. “But nothing changes. That’s why, as chairman of the House Subcommittee on Antitrust, Commercial and Administrative Law, I am calling for an investigation into whether Facebook’s conduct has violated antitrust laws.”
Cicilline’s op-ed intends to put pressure on the FTC, a useful regulatory arm that he accuses of “facing a massive credibility crisis” due to its inaction to date against Facebook. And while the FTC is the focus of Cicilline’s call to action, the op-ed provides an insightful glimpse into what Facebook actions are salient for the lawmaker that Bloomberg called “the most powerful person in tech” when he became the ranking member of the House Judiciary’s Subcommittee on Antitrust, Commercial and Administrative Law this year.
That committee, now led by a Democratic Party that has made breaking up big tech a pillar of its platform, is a potentially powerful mechanism for antitrust action against the monopolistic power brokers that dominate the Silicon Valley we’ve come to know.
“For years, privacy advocates have alerted the commission that Facebook was likely violating its commitments under the agreement. Not only did the commission fail to enforce its order, but by failing to block Facebook’s acquisition of WhatsApp and Instagram, it enabled Facebook to extend its dominance,” Cicilline wrote, noting a fine must be multiple billions of dollars to impact the massive company at all. As we reported last month, the FTC is reportedly looking at a potentially multi-billion dollar fine but such a costly reprimand has yet to materialize.
The lawmaker also cites Facebook’s “predatory acquisition strategy” in which it buys up potential competitors before they can pose a threat, stifling innovation in the process. Cicilline also views the company’s decision to restrict API access for competing products as “evidence of anticompetitive conduct” from the social giant.
Cicilline also takes a familiar cynical view of Mark Zuckerberg’s recent announcement that Facebook would weave its products together in a move toward private messaging, calling it “a dangerous power grab to head off antitrust action.” That perspective gives us a clear glimpse of what lies ahead for Facebook as the antitrust headwinds pick up around the 2020 presidential race.
“American antitrust agencies have not pursued a significant monopoly case in more than two decades, even as corporate concentration and monopoly power have reached historic levels,” Cicilline wrote.
“It’s clear that serious enforcement is long overdue.”
Facebook and the ACLU issued a joint statement this morning, noting that they have settled a class action job discrimination suit. The ACLU filed the suit in September, along with Outten & Golden LLC and the Communications Workers of America, alleging that Facebook allowed employers to target ads based on categories like race, national origin, age and gender.
The initial charges were filed on behalf of female workers who alleged they were not served up employment opportunities based on gender. Obviously all of that’s against all sorts of federal, state and local laws, including, notably, Title VII of the Civil Rights Act of 1964.
Today’s announcement finds Facebook implementing “sweeping changes” to its advertising platform in order to address these substantial concerns. The company outlined a laundry list of “far-reaching changes and steps,” including the development of a separate ad portal to handle topics like housing, employment and credit (HEC) for Facebook, Instagram and Facebook Messenger.
Targeting based on gender, age and race will not be allowed within the confines of the new system. Ditto for the company’s Lookalike Audience tool, which is similarly designed to target customers based on things like gender, age, religious views and the like.
“Civil rights leaders and experts – including members of the Congressional Black Caucus, the Congressional Hispanic Caucus, the Congressional Asian Pacific American Caucus, and Laura Murphy, the highly respected civil rights leader who is overseeing the Facebook civil rights audit – have also raised valid concerns about this issue,” Sheryl Sandberg wrote in a blog post tied to the announcement. “We take their concerns seriously and, as part of our civil rights audit, engaged the noted civil rights law firm Relman, Dane & Colfax to review our ads tools and help us understand what more we could do to guard against misuse.”
In addition to the above portal, Facebook will be creating a one-stop site where users can search among all job listings, independent of how ads are served. The company has also promised to offer “educational materials to advertisers about these new anti-discrimination measures.” Facebook will also be meeting regularly with the suit’s plaintiffs to ensure that it is continuing to meet all of the parameters of the settlement.
“As the internet — and platforms like Facebook — play an increasing role in connecting us all to information related to economic opportunities, it’s crucial that micro-targeting not be used to exclude groups that already face discrimination,” ACLU senior staff attorney Galen Sherwin said in the joint statement. “We are pleased Facebook has agreed to take meaningful steps to ensure that discriminatory advertising practices are not given new life in the digital era, and we expect other tech companies to follow Facebook’s lead.”
Further details of the settlement haven’t been disclosed by either party, but the update is clearly a bit of a conciliatory move from a company that’s landed itself on the wrong side of a large lawsuit. Even so, it ought to be regarded as a positive outcome for a problematic product offering.
Snap is under NDA with the UK’s Home Office as part of a working group tasked with developing more robust age verification technology that’s able to reliably identify children online.
The detail emerged during a parliamentary committee hearing as MPs on the Digital, Culture, Media and Sport (DCMS) Committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, director of creative strategy EMEA.
A spokesman in the Home Office press office hadn’t immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more info is forthcoming.
Under questioning by the committee Snap conceded its current age verification systems are not able to prevent children under 13 from signing up to use its messaging platform.
The DCMS committee’s interest here stems from its ongoing enquiry into immersive and addictive technologies.
Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.
But he emphasized Snap does not want underage users on its platform.
“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”
He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.
But he conceded there’s currently “no foolproof way” to prevent under 13s from signing up.
Discussing alternative approaches to verifying kids’ age online the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.
Facebook is one such company that relies on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.
“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.
“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.
“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.
“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”
“There needs to be a robust age verification system that we can all get behind,” he added.
The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.
The detail of its policy plans remains under wraps, so it’s unclear whether the Home Office intends to include setting up a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.
Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.
A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit, after a lengthy delay. (And ignoring all the hand-wringing about privacy and security risks; not to mention the fact age checks will likely be trivially easy to dodge by those who know how to use a VPN etc, or via accessing adult content on social media.)
But a centralized database of children for age verification purposes — if that is indeed the line along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls. In recent years the Chinese state has been pushing games companies to age verify users in order to enforce limits on play time for kids (apparently also in response to health concerns around video gaming addiction).
The UK has also pushed to create centralized databases of web browsers’ activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)
In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets’ — regardless of whether a particular individual is suspected of a crime.
So building yet another database to contain children’s ages isn’t perhaps as off piste as you might imagine for the country.
Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.
The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.
Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and so encourage stress.
Though they conceded users have no way to opt out of streak emoji appearing.
They also noted they have previously reduced the size of the streak emoji to make it less prominent.
But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.
You can watch the full committee hearing with Snap here.
Patreon couldn’t survive charging all creators just a 5 percent rake on the monthly subscriptions they earn from fans while building commerce tools like CRMs and merchandise to try to stay ahead of Twitch, YouTube and Google. But it also didn’t want to screw all its loyal early creators.
So today, Patreon is overhauling its pricing. Any creator can still get a 5 percent rate, but just for a Lite version without bonus tools or different fan tiers. All of Patreon’s extra features will now be in the Pro plan, with an 8 percent rate, but with existing creators grandfathered in at 5 percent. And the new Premium enterprise plan for 12 percent (9 percent for existing creators) will offer full-service merchandise sales, multi-user team accounts and dedicated customer support.
If you want the lower grandfathered rates, you’ll need to join Patreon in the next few weeks before the new rates go into effect in early May.
“With this change, Patreon is a long-term independent company that doesn’t need anyone else. That’s the move we’re making here,” says Patreon’s SVP of Product, Wyatt Jenkins. More sustainable pricing means creators won’t have to fear Patreon selling out in desperation to someone like Facebook that might neglect or exploit them.
Instead, Patreon CEO Jack Conte tells me he wants to balance powerful features with right-sized pricing for different creator types to become the platform-agnostic home for subscription patronage when tech giants are each trying to build their own. “To have a different membership for each distribution platform, that’s not going to work. You need a single place for the bottom of your distribution funnel,” Conte explains.
Patreon now has 3 million fans paying 100,000 creators more than half a billion dollars per year, and it will cross $1 billion in payouts in 2019 after six years in business. But Patreon was starving on its 5 percent rate, which some venture capitalists tell me is why they passed on its funding rounds totaling $105 million led by Thrive Capital and Index. Now it might make enough to keep the lights on, retain ownership and maybe even earn a profit one day.
Jenkins tells me Patreon spent a year talking to more than 1,000 creators to figure out how to re-price its offering. “People don’t like change. But I think in terms of change, we’re going to be able to invest in the different products in different ways. We can put a lot of horsepower into membership,” he explains. The company didn’t want to screw up like when it changed its payment processing rates a year ago, leading to creator backlash and some exodus. “We unilaterally did something that impacted creators’ patrons. That was the real landmine we stepped on.”
What Patreon discovered was some creators, especially individuals and hobbyists, didn’t care for bells and whistles. They wanted cheap and easy recurring payments so they can focus on their art, so Patreon made the 5 percent Lite plan that strips out the extra features but keeps the old rate.
More serious videographers, illustrators, comedians and pundits wanted to offer different price tiers for different levels of exclusive content. They need analytics, special offers, integrations with other productivity and commerce apps and priority customer support when things break. That’s what creators will get for 8 percent, unless they’re grandfathered in at 5 percent.
But Patreon also found there were whole media organizations with 50 employees built atop its patronage platform. They needed to be able to share accounts and get immediate support when necessary. Meanwhile, tons of creators see merchandise as a powerful way to lure in fans who want signed photos, stickers and other swag each month. “Eighty-five percent of our creators tell us we need merchandise. ‘We spend our days in the post office licking stamps. You can get great negotiation leverage since you have scale, so why aren’t you helping us with this?’ We can’t build that on 5 percent,” Jenkins tells me. They’ll all pay the 12 percent Premium plan price unless grandfathered in at 9 percent. Patreon will, in return, process, pack and ship all their merchandise.
Patreon is also changing its payment processing fees to make sure it doesn’t overpenalize smaller contributions, like creators’ popular $1 per month tiers. Now all transactions over $3 incur a 2.9 percent plus $0.30 fee similar to Stripe’s industry standard, while microtransactions under $3 cost 5 percent plus $0.10. Existing creators get the old rates, and people paying via PayPal from outside the U.S. get hit with an extra 1 percent fee.
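Taken at face value, the reported fee schedule works out as in this rough sketch (the exact rounding Patreon applies isn’t public, so that part is an assumption):

```python
def processing_fee(amount_usd: float) -> float:
    """Estimate Patreon's new per-transaction payment processing fee.

    Per the reported schedule: transactions over $3 cost 2.9% + $0.30
    (similar to Stripe's standard rate), while microtransactions of $3
    or less cost 5% + $0.10. Rounding to the nearest cent is an
    assumption -- Patreon hasn't published its exact rules.
    """
    if amount_usd > 3.00:
        fee = amount_usd * 0.029 + 0.30
    else:
        fee = amount_usd * 0.05 + 0.10
    return round(fee, 2)

# The point of the split: a $1 pledge costs $0.15 under the micro rate,
# versus $0.33 if the standard rate were applied to it.
print(processing_fee(1.00))   # 0.15
print(processing_fee(10.00))  # 0.59
```

The lower flat component on small pledges is what keeps creators’ popular $1 tiers from being eaten by fees.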
Surprisingly, one of Patreon’s most popular creators told me they actually felt bad about being grandfathered in at a lower price, because why should they get special treatment compared to other artists who just might not be as tech savvy. That said, they weren’t going to voluntarily pay a higher rate. “I guess I’m not surprised,” Conte responds. “I’ve found that creators are really humble and selfless, always thinking about other people. I can imagine them saying ‘What about these people? Why am I paying less than them?’”
If Patreon can power through the rate change without breaking momentum, it could have a bright future. It’s started a patronage trend, but leaked documents show Facebook plans to charge creators up to 30 percent like YouTube already does, and Twitch charges an astronomical 50 percent. But those rivals carry far more restrictions on content and far more distrust accrued after years of forsaking creators and tense negotiations. Patreon’s neutral platform with the cheapest rate could remain the fan subscription leader at a time when ad revenue shares are proving inadequate to support turning a passion into a profession.
When TechCrunch broke the news that Facebook planned to charge up to 30 percent, Conte said, “Honestly, it was relieving but really disappointing in some way. I think competition is good. I hope there are many membership products. I hope they’re successful and [give creators a choice]. Right now, it’s not a choice. Facebook’s product is not usable. The folks that have used Facebook’s product have turned it off. From a competitor standpoint, it confirmed my thought that Facebook doesn’t understand creators.”
That’s also why he hopes that one day the tech giants might just integrate Patreon rather than compete, and they could each get a cut of subscription revenue.
Looking forward, he says the toughest challenge for Patreon will be building three different products for three distinct types of creators without the infinite wallets of its rivals. “I think Patreon will be raising for a long time,” Conte says. That will fund Patreon’s plans for eventual international operations, where 40 percent of patrons and 75 percent of creators live. Right now Patreon is offered only in English and supports U.S. dollars. But if it can spin up local languages, currencies and payment processors, Patreon could be where creators around the world go to share with their biggest fans.
Instagram is opening a whole new revenue stream. Now the 130 million people who tap Instagram’s product tags on shopping posts will be able to buy those items without leaving the app, thanks to stored payment info. “Checkout with Instagram” launches today in the U.S. with more than 20 top brands, including Adidas, Kylie Cosmetics and Warby Parker, which will no longer have to direct customers to their websites to make a purchase.
An Instagram spokesperson confirms to TechCrunch, “We will introduce a selling fee to help fund programs and products that help make checkout possible, as well as offset transaction-related expenses.” When we asked how much the “selling fee” would charge merchants, the spokesperson told us, “We aren’t sharing the specific number right now. We are testing a selling fee with businesses during the closed beta. It will not change the price of the items for consumers.” That indicates Instagram wants merchants to cough up the fee in exchange for higher purchase conversion rates rather than forcing users to pay a convenience fee for buying through the app.
Instagram’s ad business could also get a boost as Checkout could convince brands that the social network produces better return on investment because there are fewer steps before purchase. For now, only organic posts from the launch partner merchants will feature Checkout buttons, and ads aren’t eligible. But Checkout-equipped ads could be a gold mine for Instagram, just as Facebook’s News Feed ad business looks shaky and CEO Mark Zuckerberg declares commerce as a fixture of the 2019 roadmap.
Checkout tags will appear on feed posts, Stories and Explore content from the brands in the closed beta that Instagram plans to eventually open to more businesses. When users tap the post to reveal product tags and open one, they’ll see a Checkout with Instagram button instead of the old “View on Website” button.
Their first time through they’ll enter their payment information, which is stored for future purchases. “With their protected payment information in one place, they can shop their favorite brands without needing to log in and enter their information multiple times,” Instagram explains. Saving merchants from abandoned shopping carts left by users frustrated with having to sign up with each different brand is the key value offered here. TechCrunch recently reported Instagram is prototyping a Fundraiser sticker for Stories that similarly saves payment info — a database Instagram clearly wants to build up.
After users buy something within Instagram, they’ll be able to track it from a new “Orders” section of their profile that shows the status of an order, plus options to cancel, initiate a return or contact the merchant. They’ll also get a notification from Instagram when the order ships. Interestingly, Instagram isn’t mixing receipts into its messaging product like Facebook does with Messenger.
Merchants will only get the details necessary to fulfill an order, including contact info and address, but not your actual payment info. Users will see an opt-in option to share their email address with the seller for marketing purposes. Checkout with Instagram could leave merchants with a little less data than if the purchase happened on their website. But Instagram says it will provide info on which sales it generates for a merchant.
Users can pay with PayPal, Visa, Mastercard, American Express or Discover. Instagram plans to allow merchants to integrate their Shopify, BigCommerce, ChannelAdvisor, CommerceHub and other tools with the Checkout feature. Meanwhile, Instagram confirms that interacting with Checkout will be used as a signal for ranking which content you see. Payments are processed by PayPal — an area of business Facebook has been content not to invade, and PayPal’s fees will likely be covered by Instagram’s selling fee.
“We started using product tags to make shopping more convenient for our customers,” writes Warby Parker co-founder and co-CEO Neil Blumenthal. “Checkout takes this experience one step further, making it even more intuitive and seamless for people who have discovered products they want to purchase instantaneously.” Here’s the full list of launch partner brands: Adidas, Anastasia Beverly Hills, Balmain, Burberry, ColourPop, Dior, Huda Beauty, H&M, KKW Beauty, Kylie Cosmetics, MAC Cosmetics, Michael Kors, NARS, Nike, NYX Cosmetics, Oscar de la Renta, Outdoor Voices, Ouai Hair, Prada, Revolve, Uniqlo, Warby Parker and Zara.
There’s still no sign of an Instagram standalone shopping app that was reportedly in the works. Instead, it launched a dedicated Shopping channel in Explore and tags for Stories six months ago. We recently spotted Instagram prototyping a Pinterest-style feature that would let users make publicly visible their private Collections of Saved posts. That would be a great way for commerce influencers to recommend Checkout-equipped products. Facebook has spent five years experimenting with different Buy buttons, but now it finally has them in a place they feel natural.
Instagram has fiercely protected the right to link out of its app in order to keep you steadily consuming its content. Now with more than one billion users, Instagram has trapped people’s attention inside, and it’s finally ready to sell the right to sell there.
In a blog post, Facebook vice president and deputy general counsel Chris Sonderby said that the video was viewed less than 200 times during the live broadcast, during which no users reported the video. Including views during the live broadcast, the video was viewed about 4,000 times before it was removed from Facebook. It was first reported 29 minutes after it started streaming, or 12 minutes after it had ended. Sonderby said a link to a copy was posted onto 8chan, the message board that played a major role in the video’s propagation online, before Facebook was alerted to it.
Before the shootings the suspect, a 28-year-old white man, posted an anti-Muslim and pro-fascism manifesto. Sonderby said the shooter’s personal accounts had been removed from Facebook and Instagram, and that it is “actively identifying and removing” imposter accounts.
Facebook’s new numbers come one day after the company said it had removed about 1.5 million videos of the shooting in the first 24 hours after the attack, including 1.2 million that were blocked at upload, and therefore not available for viewing. But that means it failed to block 20 percent of those videos, or 300,000, which were uploaded to the platform and therefore could be watched.
Both sets of figures, while meant to provide transparency, seem unlikely to quell criticism of the social media platform’s role in spreading violent videos and dangerous ideologies, especially since Facebook Live launched three years ago. They call into question why the platform is still heavily reliant on user reports, despite its AI and machine learning-based moderation tools, and why removals don’t happen more quickly, especially during a crisis (and even routine moderation takes a deep psychological toll on the human monitors tasked with filling in the gaps left by AI). The challenges of moderation are only compounded by Facebook’s scale (it now claims more than 2 billion monthly users).
Sonderby also said that the company has hashed the original Facebook Live video to help detect and remove other visually similar videos from Facebook and Instagram. It has also shared more than 800 visually distinct videos related to the attack through a database it shares with members of the Global Internet Forum to Counter Terrorism (GIFCT). “This incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online,” he wrote.
Other online platforms, however, have also struggled to stop the video’s spread. For example, uploaders were able to use minor modifications, like watermarks or altering the size of clips, to stymie YouTube’s content moderation tools.