In a rare moment of bipartisan accord, members of the House Judiciary Committee from both major political parties agreed that Facebook needed to take down Pages that bullied shooting survivors or called for more violence. The hearing on social media filtering practices saw policy staffers from Facebook, Google and Twitter answering questions, though Facebook absorbed the brunt of the ire. At one point, Republican Representative Steve King asked, “What about converting the large behemoth organizations that we’re talking about here into public utilities?”
The meatiest part of the hearing centered on whether social media platforms should delete accounts of conspiracy theorists and those inciting violence, rather than just removing the offending posts.
The issue has been a huge pain point for Facebook this week, after the company gave vague answers for why it hasn’t deleted known faker Alex Jones’ Infowars Page and tweeted that “We see Pages on both the left and the right pumping out what they consider opinion or analysis – but others call fake news.” Facebook’s Head of Global Policy Management Monika Bickert today reiterated that “sharing information that is false does not violate our policies.”
As I detailed in this opinion piece, I think the right solution is to quarantine the Pages of Infowars and similar fake news, preventing their posts or shares of links to their web domain from getting any visibility in the News Feed. But deleting the Page without instances of it directly inciting violence would make Jones a martyr and strengthen his counterfactual movement. Deletion should be reserved for those that blatantly encourage acts of violence.
Rep. Ted Deutch (D-Florida) asked how Infowars’ claims in YouTube videos that survivors of the Parkland shooting were crisis actors squared with the company’s policy. Google’s Global Head of Public Policy and Government Relations for YouTube Juniper Downs explained that “We have a specific policy that says that if you say a well-documented violent attack didn’t happen and you use the name or image of the survivors or victims of that attack, that is a malicious attack and it violates our policy.” She noted that YouTube has a “three strikes” policy, that it is “demoting low-quality content and promoting more authoritative content,” and that it now shows boxes atop result pages for problematic searches, like “is the earth flat?”, with facts to dispel conspiracies.
Facebook’s answer was much less clear. Bickert told Deutch that “We do use a strikes model. What that means is that if a Page, or profile, or group is posting content and some of that violates our policies, we always remove the violating posts at a certain point” (emphasis mine). That’s where Facebook became suddenly less transparent.
“It depends on the nature of the content that is violating our policies. At a certain point we would also remove the Page, or the profile, or the group at issue,” Bickert continued. Deutch then asked how many strikes conspiracy theorists get. Bickert noted that “crisis actors” claims violate its policy and it removes that content. “And we would continue to remove any violations from the Infowars Page.” But regarding Page-level removals, she got wishy-washy, saying, “If they posted sufficient content that it would violate our threshold, then the page would come down. The threshold varies depending on the different types of violations.”
Rep. Matt Gaetz (R-Florida) gave the conservatives’ side of the same argument, citing two posts by the Facebook Page “Milkshakes Against The Republican Party” that called for violence, including one saying “Remember the shooting at the Republican baseball game? One of those should happen every week.”
While those posts had been removed, Gaetz asked why the Page hadn’t been. Bickert noted that “There’s no place for any calls for violence on Facebook.” Regarding the threshold, she did reveal that “When someone posts an image of child sexual abuse imagery their account will come down right away. There are different thresholds for different violations.” But she repeatedly refused to make a judgement call about whether the Page should be removed until she could review it with her team.
Showing surprising alignment in such a fractured political era, Democratic Representative Jamie Raskin of Maryland said “I’m agreeing with the chairman about this and I think we arrived at the exact same place when we were talking about at what threshold does Infowars have their Page taken down after they repeatedly denied the historical reality of massacres of children in public school.”
Facebook can’t rely on a shadowy “the threshold varies” explanation anymore. It must outline exactly which types of violations incur not only post removal but strikes against their authors. Perhaps that’s something like removal after one post of child sexual abuse imagery, three posts inciting violence, or five posts bullying victims or denying that documented tragedies occurred, with no account-level penalty for less urgently dangerous false information.
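The rubric proposed above amounts to a per-violation strike threshold. Here is a minimal sketch of how such a tiering could be encoded; the violation names and numbers are purely illustrative assumptions drawn from the suggestion above, not Facebook’s actual policy:

```python
# Hypothetical strike thresholds: the number of violating posts after
# which the whole Page or account comes down. Illustrative only.
STRIKE_THRESHOLDS = {
    "child_sexual_abuse": 1,          # immediate account removal
    "inciting_violence": 3,
    "bullying_victims": 5,
    "denying_documented_tragedy": 5,
    "false_information": None,        # posts demoted/removed, but no account strike
}

def should_remove_account(violation_counts):
    """Return True if any violation type has met its removal threshold."""
    for violation, count in violation_counts.items():
        threshold = STRIKE_THRESHOLDS.get(violation)
        if threshold is not None and count >= threshold:
            return True
    return False
```

Under this sketch a Page comes down after a single child sexual abuse post, while a Page that only shares garden-variety false information never triggers account-level removal, however many individual posts are deleted or demoted.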
Whatever the specifics, Facebook needs to provide specifics. Until then, both liberals and conservatives will rightly claim that enforcement is haphazard and opaque.
Amidst questions about how conspiracy theorists’ accounts are treated during today’s House Judiciary hearing with Facebook, Google and Twitter on social media filtering practices, Representative Steve King (R-Iowa) dropped a bombshell. “What about converting the large behemoth organizations that we’re talking about here into public utilities?”
King’s suggestion followed his inquiries about right-wing outlet Gateway Pundit losing reach on social media and how Facebook’s algorithm worked. The insinuation was that these companies cannot properly maintain fair platforms for discourse.
The representative also suggested there may be a need for “review” of Section 230 of the Communications Decency Act, which protects interactive computer services from being treated as the publisher of content users post on their platforms. If that rule were changed, social media companies could be held responsible for illegal content, from copyright infringement to child pornography, appearing on their platforms. That would potentially cripple the social media industry, requiring extensive pre-vetting of any content they display.
The share prices of the tech giants did not see significant declines upon the representative’s comments, indicating the markets don’t necessarily fear that overbearing regulation of this nature is likely.
Here’s the exchange between King and Google’s Global Head of Public Policy and Government Relations for YouTube Juniper Downs:
King: “Ms. Downs, I think you have a sense of my concern about where this is going. I’m all for freedom of speech, and free enterprise, and for competition and finding a way that competition itself does its own regulation so government doesn’t have to. But if this gets further out of hand, it appears to me that Section 230 needs to be reviewed.
And one of the discussions that I’m hearing is ‘what about converting the large behemoth organizations that we’re talking about here into public utilities?’ How do you respond to that inquiry?”
Downs: “As I said previously, we operate in a highly competitive environment, the tech industry is incredibly dynamic, we see new entrants all the time. We see competitors across all of our products at Google, and we believe that the framework that governs our services is an appropriate way to continue to support innovation.”
Unfortunately, many of the representatives frittered away their five minutes each asking questions that companies had already answered in previous congressional hearings or public announcements, allowing them to burn the time without providing much new information. Republican reps focused many questions on whether social media platforms are biased against conservatives. Democrats cited studies saying metrics do not show this bias, and concentrated their questions on how the platforms could protect elections from disinformation.
Protestors during the hearing held up signs behind Facebook’s Head of Global Policy Management Monika Bickert depicting Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg as the heads of an octopus sitting upon a globe; the protestors were later removed.
One surprise came when Representative Jerrold Nadler (D-New York) motioned to cut the hearing for an executive session to discuss President Trump’s comments at yesterday’s Helsinki press conference, which Nadler said were submissive to Russian president Vladimir Putin. However, the motion was defeated 12-10.
Later in the hearing, Facebook came under crossfire from Democrats and Republicans over allowing Pages that bullied massacre survivors and encouraged more shootings. Facebook defended itself, saying it removes these kinds of posts. “If they posted sufficient content that it would violate our threshold, then the Page would come down,” Facebook’s Bickert noted, but she became less transparent when adding, “The threshold varies depending on the different types of violations.”
Facebook will need to come up with a much clearer rubric for exactly how that threshold varies, and make it publicly available, or it will continue to be seen as indecisive and opaque.
The UK’s Electoral Commission has published the results of a near nine-month-long investigation into Brexit referendum spending and has found that the official Vote Leave campaign broke the law by breaching election campaign spending limits.
It says Vote Leave broke the law including by channeling money to a Canadian data firm, AggregateIQ, to use for targeting political advertising on Facebook’s platform, via undeclared joint working with another Brexit campaign, BeLeave.
AggregateIQ remains the subject of a separate joint investigation by privacy watchdogs in Canada and British Columbia.
The Electoral Commission’s investigation found evidence that BeLeave spent more than
Apple’s App Store continues to outpace Google Play on revenue. In the first half of the year, the App Store generated nearly double the revenue of Google Play on half the downloads, according to a new report from Sensor Tower out today. In terms of dollars and cents, that’s $22.6 billion in worldwide gross app revenue on the App Store versus $11.8 billion for Google Play – that is, consumers spent 1.9 times as much on the App Store as on Google Play.
This trend is not new. Apple’s iOS store has consistently generated more revenue than its Android counterpart for years due to a number of factors – including the fact that Android users historically have spent less on apps than iOS users, as well as the fact that there are other Android app stores where consumers can shop – like the Amazon Appstore or the Samsung store, for example. In addition, Google Play is not available in China, but Apple’s App Store is.
Last year, consumer spending on the App Store reached $38.5 billion, again nearly double that of Google Play’s $20.1 billion.
As the new figures for the first half of 2018 indicate, consumer spending is up this year.
Sensor Tower estimates it has increased by 26.8 percent on iOS compared with the same period in 2017, and it’s up by 29.7 percent on Google Play.
The growth in spending can be partly attributed to subscription apps like Netflix, Tencent Video, and even Tinder, as has been previously reported.
Subscription-based apps are big businesses these days, having helped to boost app revenue in 2017 by 77 percent to reach $781 million, according to an earlier study. Netflix was also 2017’s top non-game app by revenue, and recently became ranked as the top (non-game) app of all-time by worldwide consumer spend, according to App Annie’s App Store retrospective.
Many of the other all-time top apps following Netflix were also subscription-based, including Spotify (#2), Pandora (#3), Tencent Video (#4), Tinder (#5), and HBO NOW (#8), for example.
And Netflix is again the top non-game app by consumer spending in the first half of 2018, notes Sensor Tower.
Game spending, however, continues to account for a huge chunk of revenue.
Consumer spending on games grew 19.1 percent in the first half of 2018 to $26.6 billion across both stores, representing roughly 78 percent of the total spent ($16.3 billion on the App Store and $10.3 billion on Google Play). Honor of Kings from Tencent, Monster Strike from Mixi, and Fate/Grand Order from Sony Aniplex were the top grossing games across both stores.
App downloads were also up in the first half of the year, if by a smaller percentage.
Worldwide first-time app installs grew to 51 billion in the first half of 2018, up 11.3 percent compared with the same period last year, when downloads totaled 45.8 billion across the two app stores.
Facebook led the way on this front with WhatsApp, Messenger, Facebook and Instagram as the top four apps across both the App Store and Google Play combined. The most downloaded games were PUBG Mobile from Tencent, Helix Jump from Voodoo, and Subway Surfers from Kiloo.
Google Play app downloads were up a bit more (13.1 percent vs iOS’s 10.6 percent) year-over-year due to Android’s reach in developing markets, reaching 36 billion. That’s around 2.4 times the App Store’s 15 billion.
Despite this, Apple’s platform still earned more than double the revenue with fewer than half the downloads, which is remarkable. And it can’t all be chalked up to China. (The country contributed about 31.7 percent of the App Store revenue last half, or $7.1 billion, to give you an idea.)
Sensor Tower tells TechCrunch that even if China were removed from the picture, the App Store would have generated $15.4 billion in gross revenue for the first half of 2018, which is still about 30 percent higher than Google Play’s $11.8 billion.
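Sensor Tower’s headline ratios are easy to verify from the figures reported above; here’s a quick back-of-the-envelope check (revenue in billions of dollars, downloads in billions of installs):

```python
# 1H 2018 figures as reported by Sensor Tower (in billions).
app_store_rev = 22.6    # App Store worldwide gross app revenue ($B)
google_play_rev = 11.8  # Google Play worldwide gross app revenue ($B)
china_rev = 7.1         # App Store revenue attributed to China ($B)

# The App Store generated roughly 1.9x Google Play's revenue...
revenue_ratio = app_store_rev / google_play_rev

# ...and even excluding China it stays roughly 30 percent ahead.
ex_china_rev = app_store_rev - china_rev
ex_china_premium = (ex_china_rev - google_play_rev) / google_play_rev

# Downloads run the opposite way: 36B Google Play installs vs 15B App Store.
download_ratio = 36 / 15
```

The small gap between the $15.5 billion computed here and Sensor Tower’s $15.4 billion simply reflects that the reported inputs are rounded to one decimal place.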
Correction: China accounted for about 31.7% of App Store revenue last half, not last quarter as previously stated.
You should cringe when you read that hashtag. Because it’s a reminder that people are being socially engineered by technology platforms to objectify and spy on each other for voyeuristic pleasure and profit.
The short version of the story attached to the cringeworthy hashtag is this: Earlier this month an individual named Rosey Blair spent the hours of a plane flight using her smartphone and social media feeds to invade the privacy of her seat neighbors, publicly gossiping about the lives of two strangers.
Her speculation was set against a backdrop of rearview creepshots, with a few barely-there scribbles added to blot out actual facial features, even as an entire privacy-invading narrative was spun around the pair without their knowledge.
#PlanePrivacyInvasion would be a more fitting hashtag. Or #MoralVacuumAt35000ft
And yet our youthful surveillance society started with a far loftier idea associated with it: Citizen journalism.
Once we’re all armed with powerful smartphones and ubiquitously fast Internet there will be no limits to the genuinely important reportage that will flow, we were told.
There will be no way for the powerful to withhold the truth from the people.
At least that was the nirvana we were sold.
What did we get? Something that looks much closer to mass manipulation: a tsunami of ad stalking, intentionally fake news and social media-enabled demagogues expertly appropriating these very same tools by gaming mindless, ethically nil algorithms.
Meanwhile, masses of ordinary people + ubiquitous smartphones + omnipresent social media feeds seems, for the most part, to be resulting in a kind of mainstream attention deficit disorder.
Yes, there is citizen journalism — such as people recording and broadcasting everyday experiences of aggression, racism and sexism, for example. Experiences that might otherwise go unreported, and which are definitely underreported.
That is certainly important.
But there are also these telling moments of #hashtaggable ethical blackout. As a result of what? Let’s call it the lure of ‘citizen clickbait’ — as people use their devices and feeds to mimic the worst kind of tabloid celebrity gossip ‘journalism’ by turning their attention and high tech tools on strangers, with (apparently) no major motivation beyond the simple fact that they can. Because technology is enabling them.
Social norms and common courtesy should kick in and prevent this. But social media is pushing in an unequal and opposite direction, encouraging users to turn anything — even strangers’ lives — into raw material to be repackaged as ‘content’ and flung out for voyeuristic entertainment.
It’s life reflecting commerce. But a particularly insidious form of commerce that does not accept editorial let alone ethical responsibility, has few (if any) moral standards, and relies, for continued function, upon stripping away society’s collective sense of privacy in order that these self-styled ‘sharing’ (‘taking’ is closer to the mark) platforms can swell in size and profit.
But it’s even worse than that. Social media as a data-mining, ad-targeting enterprise relies upon eroding our belief in privacy. So these platforms worry away at that by trying to disrupt our understanding of what privacy means. Because if you were to consider what another person thinks or feels — even for a millisecond — you might not post whatever piece of ‘content’ you had in mind.
For the platforms it’s far better if you just forget to think.
Facebook’s business is all about applying engineering ingenuity to eradicate the thoughtful friction of personal and societal conscience.
That’s why, for instance, it uses facial recognition technology to automate content identification — meaning there’s almost no opportunity for individual conscience to kick in and pipe up to quietly suggest that publicly tagging others in a piece of content isn’t actually the right thing to do.
Because it’s polite to ask permission first.
But Facebook’s antisocial automation pushes people away from thinking to ask for permission. There’s no button provided for that. The platform encourages us to forget all about the existence of common courtesies.
So we should not be at all surprised that such fundamental abuses of corporate power are themselves trickling down to infect the people who use and are exposed to these platforms’ skewed norms.
Viral episodes like #PlaneBae demonstrate that the same sense of entitlement to private information is being actively passed onto the users these platforms prey on and feed off — and is then getting beamed out, like radiation, to harm the people around them.
The damage is collective when societal norms are undermined.
Social media’s ubiquity means almost everyone works in marketing these days. Most people are marketing their own lives — posting photos of their pets, their kids, the latte they had this morning, the hipster gym where they work out — having been nudged to perform this unpaid labor by the platforms that profit from it.
The irony is that most of this work is being done for free; only the platforms are being paid. Though there are some people making a very modern living: the new breed of ‘life sharers’ who willingly polish, package and post their professional existence as a brand of aspirational lifestyle marketing.
Social media’s gift to the world is that anyone can be a self-styled model now, and every passing moment a fashion shoot for hire, thanks to the largesse of highly accessible social media platforms providing almost anyone who wants it with their own self-promoting shop window on the world, plus all the promotional tools they could ever need.
Just step up to the glass and shoot.
And then your vacation beauty spot becomes just another backdrop for the next aspirational selfie. Although those aquamarine waters can’t be allowed to dampen or disrupt photo-coifed tresses, nor sand get in the camera kit. In any case, the makeup took hours to apply and there’s the next selfie to take…
What does the unchronicled life of these professional platform performers look like? A mess of preparation for projecting perfection, presumably, with life’s quotidian business stuffed higgledy piggledy into the margins — where they actually sweat and work to deliver the lie of a lifestyle dream.
Because these are also fakes — beautiful fakes, but fakes nonetheless.
We live in an age of entitled pretence. And while it may be totally fine for an individual to construct a fictional narrative that dresses up the substance of their existence, it’s certainly not okay to pull anyone else into your pantomime. Not without asking permission first.
But the problem is that social media is now so powerfully omnipresent its center of gravity is actively trying to pull everyone in — and its antisocial impacts frequently spill out and over the rest of us. And they rarely if ever ask for consent.
What about the people who don’t want their lives to be appropriated as digital windowdressing? Who weren’t asking for their identity to be held up for public consumption? Who don’t want to participate in this game at all — neither to personally profit from it, nor to have their privacy trampled by it?
The problem is the push and pull of platforms against privacy has become so aggressive, so virulent, that societal norms that protect and benefit us all — like empathy, like respect — are getting squeezed and sucked in.
The ugliness is especially visible in these ‘viral’ moments when other people’s lives are snatched and consumed voraciously on the hoof — as yet more content for rapacious feeds.
Think too of the fitness celebrity who posted a creepshot + commentary about a less slim person working out at their gym.
Or the YouTuber parents who monetize videos of their kids’ distress.
Or the men who post creepshots of women eating in public — and try to claim it’s an online art project rather than what it actually is: A privacy violation and misogynistic attack.
Or, on a public street in London one day, I saw a couple of giggling teenage girls watching a man at a bus stop who was clearly mentally unwell. Pulling out a smartphone, one girl hissed to the other: “We’ve got to put this on YouTube.”
For platforms built by technologists without thought for anything other than growth, everything is a potential spectacle. Everything is a potential post.
So they press on their users to think less. And they profit at society’s expense.
It’s only now, after social media has embedded itself everywhere, that platforms are being called out for their moral vacuum; for building systems that encourage abject mindlessness in users — and serve up content so bleak it represents a form of visual cancer.
Humans have always told stories. Weaving our own narratives is both how we communicate and how we make sense of personal experience, creating order out of events that are often disorderly, random, even chaotic.
The human condition demands a degree of pattern-spotting for survival’s sake; so we can pick our individual path out of the gloom.
But platforms are exploiting that innate aspect of our character. And we, as individuals, need to get much, much better at spotting what they’re doing to us.
We need to recognize how they are manipulating us; what they are encouraging us to do — with each new feature nudge and dark pattern design choice.
We need to understand their underlying pull. The fact they profit by setting us as spies against each other. We need to wake up, personally and collectively, to social media’s antisocial impacts.
Perspective should not have to come at the expense of other people getting hurt.
This week the woman whose privacy was thoughtlessly repackaged as public entertainment when she was branded and broadcast as #PlaneBae, and who has suffered harassment and yet more unwelcome attention as a direct result, gave a statement to Business Insider.
“#PlaneBae is not a romance — it is a digital-age cautionary tale about privacy, identity, ethics and consent,” she writes. “Please continue to respect my privacy, and my desire to remain anonymous.”
And as a strategy to push against the antisocial incursions of social media, remembering to respect people’s privacy is a great place to start.
Facebook is continuing to devote more resources to the development of AI-focused chips, bringing aboard a senior director of engineering from Google who worked on chips for Google’s products to lead its efforts, Bloomberg reports.
We’ve reached out to Google and Facebook for confirmation.
Shahriar Rabii spent nearly seven years at Google before joining Facebook this month as its VP and Head of Silicon, according to his LinkedIn profile.
Facebook’s work on AI-focused custom silicon has been the topic of rumors and reports over the past several months. It’s undoubtedly a bold direction for the company, though it’s unclear how interested Facebook is in creating custom silicon for consumer devices or whether it’s more focused on building chips for its own servers as it also looks to accelerate its research efforts.
Rabii’s work at Google seemed to encompass a good deal of work on chips for consumer devices, specifically work on the Pixel 2’s Visual Core chip, which brought machine learning intelligence to the device’s camera.
Facebook has long held hardware ambitions, but its Building 8 hardware division appears closer than ever to shipping its first products as the company’s rumored work on a touchscreen smart speaker to compete with the Echo Show continues. Meanwhile, Facebook has also continued building virtual reality hardware on Qualcomm’s mobile chipsets.
As Silicon Valley’s top tech companies continue to compete aggressively for talent amongst artificial intelligence experts, this marks another departure from Google. Earlier this year, Apple poached Google’s AI head.
Alex Jones’ Infowars is a fake-news peddler. But Facebook deleting its Page could ignite a fire that consumes the network. Still, some critics are asking why it hasn’t done so already.
This week Facebook held an event with journalists to discuss how it combats fake news. The company’s recently appointed head of News Feed, John Hegeman, explained: “I guess just for being false, that doesn’t violate the community standards. I think part of the fundamental thing here is that we created Facebook to be a place where different people can have a voice.”
In response, CNN’s Oliver Darcy tweeted: “I asked them why InfoWars is still allowed on the platform. I didn’t get a good answer.” BuzzFeed’s Charlie Warzel meanwhile wrote that allowing the Infowars Page to exist shows that “Facebook simply isn’t willing to make the hard choices necessary to tackle fake news.”
Facebook’s own Twitter account tried to rebuke Darcy by tweeting, “We see Pages on both the left and the right pumping out what they consider opinion or analysis – but others call fake news. We believe banning these Pages would be contrary to the basic principles of free speech.” But harm can be minimized without full-on censorship.
There is no doubt that Facebook hides behind political neutrality. It fears driving away conservative users for both business and stated mission reasons. That strategy is exploited by those like Jones who know that no matter how extreme and damaging their actions, they’ll benefit from equivocation that implies “both sides are guilty,” with no regard for degree.
Instead of being banned from Facebook, Infowars and sites like it that constantly and purposely share dangerous hoaxes and conspiracy theories should be heavily down-ranked in the News Feed.
Effectively, they should be quarantined, so that when they or their followers share their links, no one else sees them.
“We don’t have a policy that stipulates that everything posted on Facebook must be true — you can imagine how hard that would be to enforce,” a Facebook spokesperson told TechCrunch. “But there’s a very real tension here. We work hard to find the right balance between encouraging free expression and promoting a safe and authentic community, and we believe that down-ranking inauthentic content strikes that balance. In other words, we allow people to post it as a form of expression, but we’re not going to show it at the top of News Feed.”
Facebook already reduces the future views of posts by roughly 80 percent when they’re established as false by its third-party fact checkers like PolitiFact and the Associated Press. For repeat offenders, I think that reduction in visibility should be closer to 100 percent of News Feed views. What Facebook does do to those whose posts are frequently labeled as false by its checkers is “remove their monetization and advertising privileges to cut off financial incentives, and dramatically reduce the distribution of all of their Page-level or domain-level content on Facebook.”
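In ranking terms, the gap between Facebook’s current penalty and the quarantine argued for here is just the size of a demotion multiplier. The sketch below is an illustrative model of my own, not Facebook’s actual News Feed ranking code; the roughly 80 percent reduction comes from the policy described above, while the zero multiplier represents the proposed quarantine:

```python
# Illustrative demotion multipliers applied to a post's ranking score.
# 0.2 models Facebook's stated ~80% reduction in future views for
# fact-checked false posts; 0.0 models the quarantine proposed here.
DEMOTION = {
    "fact_checked_false": 0.2,       # ~80% fewer future News Feed views
    "repeat_offender_domain": 0.0,   # quarantine: effectively zero feed reach
}

def ranked_score(base_score, labels):
    """Apply every applicable demotion multiplier to a post's base score."""
    score = base_score
    for label in labels:
        score *= DEMOTION.get(label, 1.0)
    return score
```

With a multiplier of 0.2 a flagged post keeps a fifth of its reach; at 0.0 the post still exists on the Page but effectively never surfaces in the feed, which is the distinction between demotion and quarantine.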
The company wouldn’t comment directly about whether Infowars has already been hit with that penalty, noting, “We can’t disclose whether specific Pages or domains are receiving such a demotion (it becomes a privacy issue).” For any story fact-checked as false, it shows related articles from legitimate publications to provide other perspectives on the topic, and notifies people who have shared it or are about to.
But that doesn’t solve the initial surge of traffic. Unfortunately, Facebook’s limited array of fact-checking partners is strapped with so much work that it can only get to so many BS stories quickly. That’s a strong argument for dedicating more funding to organizations like Snopes, preferably from even-keeled nonprofits, though the risks of governments or Facebook chipping in might be worth it.
Given that fact-checking will likely never scale to be instantly responsive to all fake news in all languages, Facebook needs a more drastic option to curtail the spread of this democracy-harming content on its platform. That might mean a full loss of News Feed posting privileges for a certain period of time. That might mean that links re-shared by the supporters or agents of these pages get zero distribution in the feed.
But it shouldn’t mean their posts or Pages are deleted, or that their links can’t be opened unless they clearly violate Facebook’s core content policies.
Why downranking and quarantine? Because banning would only stoke conspiratorial curiosity about these inaccurate outlets. Trolls will use the bans as a badge of honor, saying, “Facebook deleted us because it knows what we say is true.”
They’ll claim they’ve been unfairly removed from the proxy for public discourse that exists because of the size of Facebook’s private platform.
People who swallowed the propaganda of “her emails,” much of which was pushed by Alex Jones himself, assumed that Hillary Clinton’s deleted emails must have contained evidence of some unspeakable wrongdoing — something so bad it outweighed anything done by her opponent, even when the accusations against him had evidence and witnesses aplenty.
If Facebook deleted the Pages of Infowars and their ilk, it would be used as a rallying cry that Jones’ claims were actually clairvoyant. That he must have had even worse truths to tell about his enemies and so he had to be cut down. It would turn him into a martyr.
Those who benefit from Infowars’ bluster would use Facebook’s removal of its Page as evidence that it’s massively biased against conservatives. They’d push their political allies to vindictively regulate Facebook beyond what’s actually necessary. They’d call for people to delete their Facebook accounts and decamp to some other network that’s much more of a filter bubble than what some consider Facebook to already be. That would further divide the country and the world.
When someone has a terrible, contagious disease, we don’t execute them. We quarantine them. That’s what should happen here. The exception should be for posts that cause physical harm offline. That will require tough judgement calls, but knowingly inciting mob violence, for example, should not be tolerated. Some of Infowars’ posts, such as those about Pizzagate that led to a shooting, might qualify for deletion by that standard.
Facebook is already trying to grapple with this after rumors and fake news spread through forwarded WhatsApp messages have led to crowds lynching people in India and attacks in Myanmar. Peer-to-peer chat lacks the same centralized actors to ban, though WhatsApp is now at least marking messages as forwarded, and it will need to do more. But for less threatening yet still blatantly false news, quarantining may be sufficient. This also leaves room for counterspeech, where disagreeing commenters can refute posts or share their own rebuttals.
Few people regularly visit the Facebook Pages they follow. They wait for the content to come to them through the News Feed, via the Page’s posts and their friends’ shares. Eliminating that virality vector would severely limit this fake news’ ability to spread without requiring the posts or Pages to be deleted, or the links to be rendered unopenable.
If Facebook wants to uphold a base level of free speech, it may be prudent to let the liars have their voice. However, Facebook is under no obligation to amplify that speech, and the fakers have no entitlement for their speech to be amplified.
A new analysis by real estate resource BuildZoom sheds additional light on the Menlo Park-based company’s efforts to build a satellite virtual reality HQ in and around Seattle. Over the last three years, Facebook has spent $106 million on construction and development permits for Oculus offices in Redmond.
In 2018 alone, Facebook spent $88.3 million on Oculus-related permits for as many as eight new offices in the area. BuildZoom’s analysis identifies five properties in particular, all on Willow Road in Redmond, that span more than 90,000 square feet of lab and office space. Those locations are 10545 Willows Rd., 10785 Willows Rd., 9805 Willows Rd., 9845 Willows Rd. and 9461 Willow Road.
Last November, Seattle-based news site GeekWire reported that Facebook was on the hunt for 200,000 square feet worth of R&D space in Redmond, to expand its existing Oculus research efforts there. At the time, Oculus listed more than 60 job positions in Redmond, in addition to a smaller number of open roles for its Oculus operations in Seattle proper. Oculus is currently hiring for 121 positions in Redmond, with 42 of them in research.
TechCrunch reached out to Facebook about its plans for the new Oculus offices but the company declined to comment. Late last year, an Oculus spokesperson told TechCrunch that the company is growing its Seattle team to achieve its goal to “get 1 billion people into VR.” This May, Oculus announced that its Oculus Research division would be rebranded as Facebook Reality Labs.
The growing Oculus offices join Valve, Microsoft’s HoloLens team and other nearby VR operations to cement Seattle as one of tech’s major VR hubs beyond Silicon Valley.
Facebook has released its fifth diversity report, and it’s fine. Unless companies fire everyone and start over, we’re not going to see drastic improvements anytime soon.
“A critical lesson we’ve learned is that recruiting, retaining and developing a diverse, inclusive workforce should be a priority from day one,” Facebook Chief Diversity Officer Maxine Williams wrote in a blog post. “The later you start taking deliberate action to increase diversity, the harder it becomes.”
Anyway, worldwide, Facebook is 36 percent female, up from 31 percent in 2014. In the U.S., Facebook is 3.5 percent black, compared to just 2 percent in 2014, and 4.9 percent Latinx compared to 4 percent in 2014. White people, unsurprisingly, still make up the single largest population of employees (46.4 percent today versus 57 percent in 2014). The upside to this is that white people no longer make up the majority at Facebook.
At the leadership and technical levels, change has not occurred for black employees, who still make up just two percent of people in leadership roles and one percent of employees in technical roles. Latinx employees make up three percent of the technical team and three percent of the leadership team, down from four percent in 2014.
In her blog post, Williams noted that “diversity is critical to our success as a company.”
It’s true and the data is there to back it up. Companies in the top quartile for ethnic diversity at the executive level are 33 percent more likely to have above-average profitability than companies in the bottom quartile, according to McKinsey’s report, “Delivering through Diversity.”
When the Cambridge Analytica scandal went down, some pointed to Facebook’s overall lack of diversity as part of the problem. That’s because homogenous cultures lead to limited perspectives and potential lack of awareness of things that may be more obvious to diverse groups of people. Perhaps if Facebook had been more diverse, that whole fiasco could’ve been prevented.
You can check out Facebook’s full report here.
Back in April, Facebook announced it would be working with a group of academics to establish an independent research commission to look into issues of social and political significance using the company’s own extensive data collection. That commission just came out of stealth; it’s called Social Science One, and its first project will have researchers analyzing about a petabyte’s worth of sharing data and metadata.
The way the commission works is basically that a group of academics is created and given full access to the processes and data sets that Facebook could potentially provide. They identify and help design interesting sets based on their experience as researchers themselves, then document them publicly — for instance, a set (imaginary for now) may be described as 10 million status updates taken during the week of the Brexit vote, with such and such metadata included.
This documentation describing the set doubles as a “request for proposals” from the research community. Other researchers interested in the data propose analyses or experiments, which are evaluated by the commission. These proposals will be peer-reviewed with help from the Social Science Research Council. If a proposal has merit, it may be awarded funding, data, and other benefits; resulting papers can be published however the researchers wish, with no restrictions like pre-approval by Facebook or the commission.
“The data collected by private companies has vast potential to help social scientists understand and solve society’s greatest challenges. But until now that data has typically been unavailable for academic research,” said Social Science One co-founder, Harvard’s Gary King, in a blog post announcing the initiative. “Social Science One has established an ethical structure for marshaling privacy preserving industry data for the greater social good while ensuring full academic publishing freedom.”
If you’re curious about the specifics of the partnership, it’s actually been described in a paper of its own, available here. Nate Persily is the other co-chair; he and King were selected by Facebook and the foundations funding the project (listed below), who then selected the other scholars in the group.
The first data set is a juicy one: “almost all” public URLs shared and clicked by Facebook users globally, accompanied by a host of useful metadata.
It will contain “on the order of 2 million unique URLs shared in 300 million posts, per week,” reads a document describing the set. “We estimate that the data will contain on the order of 30 billion rows, translating to an effective raw size on the order of a petabyte.”
The metadata includes country, user age, device and so on, but also dozens of other items, such as “ideological affiliation bucket,” the proportion of friends versus non-friends who viewed a post, feed position, the number of total shares, clicks, likes, hearts, flags… there’s going to be quite a lot to sort through. Naturally all this is carefully pruned to protect user privacy — this is a proper research data set, not a Cambridge Analytica-style catch-all siphoned from the service.
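Those figures roughly hang together. A hypothetical back-of-envelope check (treating the published order-of-magnitude estimates as exact, which they are not) looks like this:

```python
# Back-of-envelope check of the published estimates for the URL data set:
# ~2M unique URLs in ~300M posts per week, ~30B rows, ~1 PB raw in total.
rows = 30e9                # "on the order of 30 billion rows"
raw_bytes = 1e15           # "on the order of a petabyte"
posts_per_week = 300e6     # "300 million posts, per week"

bytes_per_row = raw_bytes / rows
print(f"~{bytes_per_row / 1024:.0f} KiB per row")  # ~33 KiB of data plus metadata

# If each post maps to roughly one row, the set would span about:
weeks = rows / posts_per_week
print(f"~{weeks:.0f} weeks (~{weeks / 52:.0f} years) of sharing data")  # ~100 weeks, ~2 years
```

Roughly 33 KiB per row and about two years of sharing activity, in other words, which is consistent with the scale of archive the researchers describe.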
In a call accompanying the announcement, King explained that the commission had much more data coming down the pipeline, with a focus on disinformation, polarization, election integrity, political advertising and civic engagement.
“It really does get at some of the fundamental questions of social media and democracy,” King said on the call.
The other sets are in various stages of completeness or permission: post-election survey participants in Mexico and elsewhere are being asked if their responses can be connected with their Facebook profiles; the political ad archive will be formally made available; they’re working on something with CrowdTangle; there are various partnerships with other researchers and institutions around the world.
A “continuous feed of all public posts on Facebook and Instagram” and “a large random sample of Facebook newsfeeds” are also under consideration, probably encountering serious scrutiny and caveats from the company.
Of course, quality research must be paid for, and it would be irresponsible not to note that the grants being disbursed by Social Science One are funded not by Facebook but by a number of foundations: the Laura and John Arnold Foundation, The Democracy Fund, The William and Flora Hewlett Foundation, The John S. and James L. Knight Foundation, The Charles Koch Foundation, Omidyar Network’s Tech and Society Solutions Lab and The Alfred P. Sloan Foundation.
To be clear (you can never be too clear when funding is involved), the foundations put their money into SSRC’s Social Data Initiative, a shared fund from which money is then distributed both to cover Social Science One’s operations and to pay out the grants. Facebook, everyone involved in this repeatedly told me, is out of the loop except for having helped pick the co-chairs at the beginning. That independence is critical, of course, if anyone is to trust the resulting research.
You can keep up with the organization’s work here; it really is a promising endeavor and will almost certainly produce some interesting science — though not for some time. We’ll keep an eye out for any research emerging from the partnership.
Update: The original headline described the dataset as “user data,” which I don’t think is inaccurate, but the organization’s suggested description of it as “URL data” is, I think, inadequate. I’ve settled for “user interactions,” since that’s more what the dataset is focused on anyway. I also made some slight changes to reflect that the SSRC reviews the proposals, not the papers, and to add the selection process for the co-chairs and other academics.
Facebook is getting guide bots to help tourist bots explore Hell’s Kitchen in a virtual New York City. It’s not just for kicks, either; a new research paper published today by FAIR is looking to examine how AI systems can orient themselves and communicate observed data better than humans can.
The setup for Facebook’s “Talk the Walk” research experiment involves throwing a “tourist” bot onto a random street corner of NYC and getting a “guide” bot to direct them to a spot on a 2D map. To build it, Facebook captured 360-degree photos of a bunch of different street corners around New York and fed them to the AI tourist bot, which then had to peer around at the behest of the guide agent; the guide, in turn, had to work out where the tourist was and direct it through a text conversation.
It’s indeed quite the novel experiment, which plays out like this in practice:
Guide: Hello, what are you near?
Tourist: Hello, in front of me is a Brooks Brothers
Guide: Is that a shop or restaurant?
Tourist: It is a clothing shop.
Guide: You need to go to the intersection in the northwest corner of the map
Tourist: There appears to be a bank behind me.
Guide: Ok, turn left then go straight up that road...
Facebook isn’t doing all of this to give you a virtual guide in some unannounced mapping product. This is Facebook AI Research, as opposed to the company’s applied machine learning arm, so the work resides in the long-term, less product-centric sphere. What this experiment is helping Facebook’s AI researchers approach is a concept called “Embodied AI.”
Embodied AI basically entails giving AI models the ability to learn while on-the-go gathering data that is present around them that can help them make sense of what they already know. In “Talk the Walk,” the guide AI bot had all of this 2D map data and the tourist bot had all of this rich 360 visual data, but it was only through communication with each other that they were able to carry out their directives.
The real goal was to have the two agents gather information through natural language, but the researchers found that the bots did a better job of completing the task when they used “synthetic language,” relying on simpler symbols to convey information and location. This less natural way of communicating not only outperformed a more human-like chat, it also let the bots find their way more efficiently than humans do in a natural language conversation.
What made this environment particularly difficult was the fact that it was the real world. The 360 snapshots were, of course, much more cluttered than what would appear in the simulated models in which a lot of these experiments would typically run. Putting this into words is hard enough when two humans are already vaguely familiar with a location; for two bots that have access to different data, this can be awfully difficult to communicate efficiently.
To tackle this, Facebook built a mechanism called MASC (Masked Attention for Spatial Convolution) that essentially lets the agents’ language models quickly pick out the keywords in each response that matter most for working out what’s being conveyed. Facebook said that using the mechanism doubled the accuracy of the results being tested.
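The paper’s actual MASC architecture is more involved, but the core trick, a language-predicted mask that selects which direction a spatial convolution shifts the guide’s belief over the map, can be sketched in a few lines of numpy. The 5x5 grid, the kernel indexing and the direction name below are illustrative assumptions, not the paper’s exact setup:

```python
import numpy as np

def masc_step(map_features, action_logits):
    """Toy sketch of masked attention over a spatial convolution.

    map_features: (H, W) grid, e.g. the guide's belief over tourist location.
    action_logits: 9 scores for a 3x3 neighborhood of movement offsets,
        imagined as predicted from the tourist's messages.
    Returns the grid after shifting belief mass in the attended direction.
    """
    # Softmax over the 3x3 kernel: the "mask" that attends to one offset.
    mask = np.exp(action_logits - action_logits.max())
    mask = (mask / mask.sum()).reshape(3, 3)

    h, w = map_features.shape
    padded = np.pad(map_features, 1)  # zero-pad the borders
    out = np.zeros_like(map_features)
    for i in range(h):
        for j in range(w):
            # Cross-correlate each 3x3 window with the attention mask.
            out[i, j] = (padded[i:i + 3, j:j + 3] * mask).sum()
    return out

# Belief peaked at the center of a 5x5 map, with logits favoring one offset.
belief = np.zeros((5, 5))
belief[2, 2] = 1.0
move_east = np.full(9, -10.0)
move_east[3] = 10.0  # kernel cell (1, 0): weight on the western neighbor,
                     # which shifts the peak one cell east
moved = masc_step(belief, move_east)
print(moved.argmax())  # -> 13, i.e. row 2, col 3: one cell east of the start
```

The language model’s job, in this framing, is only to produce `action_logits` from the dialogue; the convolution does the spatial bookkeeping.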
For Facebook’s part, this is foundational research in that it raises far more questions about best practices than it answers. But even grasping at those questions is an important milestone, the company’s researchers say, and a good way to point the broader community toward hard problems that need to be tackled.
“If you really want to solve all of AI, then you probably want to have these different modules or components that solve different subproblems,” Facebook AI research scientist Douwe Kiela told me. “In that sense this is really a challenge to the community asking people how they would solve this and inviting them to work with us in this exciting new research direction.”
The UK’s privacy watchdog revealed yesterday that it intends to fine Facebook the maximum possible amount.
Facebook is giving advertisers new ways to show off their products, including with augmented reality.
At its F8 developer conference earlier this year, Facebook announced that it was working with businesses to use AR to show off products in Messenger. Now a similar experience will start appearing in the News Feed, with a select group of advertisers testing out AR ads.
Ty Ahmad-Taylor, vice president of product marketing for Facebook’s global marketing solutions, showed off ads that incorporated his face into Candy Crush gameplay footage, and other ads that allowed shoppers to see how virtual sunglasses and makeup would look on their own faces.
“People traditionally have to go into stores to do this,” Ahmad-Taylor said. “People still really love that experience, but they would like to try it at home” — so this “bridges the gap.”
These ads look like normal in-feed ads at first, but they include a “Tap to try it on” option, which opens up the AR capabilities. And of course if you like the way it looks in AR, you can go ahead and buy the product.
Facebook says Michael Kors was the first brand to test out AR ads in the News Feed, with Sephora, NYX Professional Makeup, Bobbi Brown, Pottery Barn, Wayfair and King planning their own tests for later this summer.
Ahmad-Taylor made the announcement this morning at a New York City event for journalists and marketers highlighting Facebook’s advertising plans for the holidays.
In addition, he announced a new Video Creation Kit, which will allow advertisers to incorporate existing images into templates for mobile video ads. According to weight loss company Noom, which has been testing out these tools, the resulting videos performed 77 percent better than the static images.
Lastly, Facebook says it will continue to expand its support for shopping in Instagram Stories. It made shopping tags available to select brands in Stories last month, and for the holidays, it plans to roll that out to all brands that have enabled shopping in Instagram. It’s also making its collections ad format available to all advertisers.
As Twitter finally gets serious about purging fake accounts, and YouTube says it will try to firefight conspiracy theories and fake news flaming across its platform with $25M to fund bona fide journalism, Facebook-owned WhatsApp is grappling with its own fake demons in India, where social media platforms have been used to seed and spread false rumors — fueling mob violence and leading to a number of deaths in recent years.
This week Facebook has taken out full-page WhatsApp -branded adverts in Indian newspapers to try to stem the tide of life-threatening digital fakes spreading across social media platforms in the region with such tragic results.
It’s not the first time the company has run newspaper ads warning about fake news in India, though it does appear to be first time it’s responded to the violence being sparked by fakes spreading on WhatsApp specifically.
The full-page WhatsApp anti-fakes advert also informs users that “starting this week” the platform is rolling out a new feature that will allow users to determine whether a message has been forwarded. “Double check the facts when you’re not sure who wrote the original message,” it warns.
This follows tests WhatsApp was running back in January when the platform trialed displaying notifications for when a message had been forwarded many times.
Evidently WhatsApp has decided to take that feature forward, at least in India, although how effective a check it will be on technology-accelerated fakes that are likely also fueled by local prejudices remains to be seen.
Trying to teach nuanced critical thinking is difficult when a more basic lack of education may be contributing to the mistrust and credulity that let malicious fakes and rumors targeting certain people or segments of the population spread in the first place. Such a campaign risks being ineffectual, and risks coming across as merely irresponsible fiddling around the edges of a grave problem that has already claimed multiple lives.
Facebook also stands accused of failing to respond quickly enough to similar risks in Myanmar — where the UN recently warned that its platform was being weaponized to spread hate speech and used as a tool to fuel ethnic violence.
Reuters reports that the first batch of WhatsApp anti-fake news ads are running in “key Indian newspapers”, and images posted to Twitter show an English-language full-page advert — so you do have to question who these first ads are really intended to influence.
But the news agency reports that Facebook also intends to publish similar ads in regional dailies across India over the course of this week.
We’ve reached out to WhatsApp with questions and will update this story with any response.
“We are starting an education campaign in India on how to spot fake news and rumours,” a WhatsApp spokesman told Reuters in a statement. “Our first step is placing newspaper advertisements in English and Hindi and several other languages. We will build on these efforts.”
The quasi-educational WhatsApp fake news advert warns users about “false information”, offering ten tips to spot fakes — many of which boil down to ‘check other sources’ to try to verify whether what you’ve been sent is true.
Another tip urges WhatsApp users to “question information that upsets you” and, if they do read something that makes them “angry or afraid”, to “think twice before sharing it again”.
“If you are not sure of the source or concerned that the information may be untrue, think twice before sharing,” reads another tip.
The last tip warns that “fake news often goes viral” — warning: “Just because a message is shared many times, does not make it true.”
In recent times, Facebook has also run full-page ads in newspapers to apologize for failing to safeguard user data in the wake of the Cambridge Analytica scandal, and taken out print adverts ahead of European elections to warn against attempts to spread fake news to try to meddle with democratic processes.
Before Snapchat made social media about just today, Facebook made it about forever. The 2011 “Timeline” redesign of the profile and keyword search unlocked your past, encouraging you to curate colorful posts about your life’s top moments. That was actually an inspiration for Snapchat, as its CEO Evan Spiegel wrote in its IPO announcement that “We learned that creativity can be suppressed by the fear of permanence.”
Now Facebook is finding a middle ground by optionally unlocking the history of your Stories that otherwise disappear after 24 hours. Facebook will soon begin testing Stories Highlights, the company confirmed to TechCrunch. Similar to Instagram Stories Highlights, it will let you pick your favorite expired photos and videos, compile them into themed collections with titles and cover images and display them on your profile.
The change further differentiates Facebook Stories from the Snapchat Stories feature it copied. It’s smart for Facebook, because highly compelling content was disintegrating each day, dragging potential ad views to the grave with it. And for its 150 million daily users, it could make the time we spend obsessing over social media Stories a wiser investment. If you’re going to interrupt special moments to capture them with your phone, the best ones should still pay dividends of self-expression and community connection beyond a day later.
Facebook Stories Highlights was first spotted by frequent TechCrunch tipster Jane Manchun Wong, who specializes in generating screenshots of unreleased features out of the APK files of Android apps. TechCrunch inquired about the feature, and a Facebook spokesperson provided this statement: “People have told us they want a way to highlight and save the Stories that matter most to them. We’ll soon start testing highlights on Facebook – a way to choose Stories to stay on your profile, making it easier to express who you are through memories.”
These Highlights will appear on a horizontal scroll bar on your profile, and you’ll be able to see how many people viewed them just like with your Stories. They’ll default to being viewable by all your friends, but you can also restrict Highlights to certain people or make them public. The latter could be useful for public figures trying to build an audience, or anyone who thinks their identity is better revealed through the commentary on the world that Stories’ creative tools enable, as opposed to some canned selfies and profile pics.
Facebook paved the way for Highlights by launching the Stories Archive in May. This automatically backs up your Stories privately to your profile so you don’t have to keep the saved versions on your phone, wasting storage space. That Archive is the basis for being able to choose dead Stories to show off in your Highlights. Together, they’ll encourage users to shoot silly, off-the-cuff content without that “fear of permanence,” but instead with the opportunity. If you want to spend a half hour decorating a Facebook Story with stickers and drawing and captions and augmented reality, you know it won’t be in vain.
While many relentlessly criticize Facebook for stealing the Stories format from Snapchat, its rapid iteration and innovation on the format means the two companies’ versions are sharply diverging. Snapchat still lacks a Highlights-esque feature despite launching its Archive-style Memories back in July 2016. Instead of enhancing the core Stories product that made the app a teen phenomenon, it’s concentrated on Maps, gaming, Search, professional Discover content, and a disastrously needless redesign.
Facebook’s family of apps seized on the stagnation of Snapchat Stories and its neglect of the international market. It copied whatever was working while developing new features like Instagram’s Superzoom and Focus portrait mode, the ability to reshare public feed posts as quote tweet-style Stories and the addition of licensed music soundtracks. While writing this article, I even discovered a new Facebook Stories option called Blur that lets you shroud a moving subject with a dream-like haze, as demonstrated with my dumb face here.
The relentless drive to add new options and smooth out performance has paid off. Now Instagram has 400 million daily Stories users, WhatsApp has 450 million and Facebook has 150 million, while Snapchat’s whole app has just 191 million. As Instagram CEO Kevin Systrom admitted about Snapchat, “They deserve all the credit.” Still, it hasn’t had a megahit since Stories and AR puppy masks. The company’s zeal for inventing new ways to socialize is admirable, though not always a sound business strategy.
At first, the Stories war was a race, to copy functionality and invade new markets. Instagram and now Facebook making ephemerality optional for their Stories signals a second phase of the war. The core idea of broadcasting content that disappears after a day has become commoditized and institutionalized. Now the winner will be declared not as who invented Stories, but who perfected them.
It was already known that Facebook had set up a group within the company to “explore” blockchain tech, headed up by longtime Messenger chief David Marcus. However, the latest executive reshuffle appears to point to the social networking behemoth getting more serious about developing on top of blockchain technology.
According to his LinkedIn profile, Evan Cheng, a director of engineering at Facebook, has moved to the position of Director of Engineering, Blockchain. A well-respected “low level” computer engineer, he was previously responsible for heading up Programming Languages & Runtimes at the company, a position he held for nearly three years.
Prior to that, Cheng spent nearly ten years working at Apple, most recently holding the position of Senior Manager, Low Level Tools. He also worked on compilation technology and other back end engineering.
He also tweets about blockchain and is reportedly an advisor to a number of blockchain startups/projects, including Zilliqa and ChainLink.
“It means it’s not just an exploratory project,” is how one source who tracks the blockchain space speculatively framed Cheng’s move to Facebook’s blockchain team. His reasoning: recruiting Cheng, who knows more than a thing or two about performance and scalability, to the blockchain group signals the importance of the project.
Meanwhile, Marcus and Cheng aren’t the only Facebook execs to have been tasked with building out the social network’s burgeoning blockchain work. In a recent executive reshuffle, we reported that Instagram’s former VP of Product Kevin Weil has taken up the position of VP of Product, Blockchain at Facebook. See TechCrunch’s in-depth analysis of those moves and how Facebook could utilise blockchain.
Update: Facebook has confirmed Evan Cheng’s new position as Director of Engineering, Blockchain.
Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning technology to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law.
The researchers have also now got support from European consumer organization BEUC — for a ‘Claudette meets GDPR’ project — which specifically applies their tool, dubbed Claudette, to evaluate compliance with the EU’s General Data Protection Regulation.
Early results from this project have been released today, with BEUC saying the AI was able to automatically flag a range of problems with the language being used in tech T&Cs.
The researchers set Claudette to work analyzing the privacy policies of 14 companies in all — namely: Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, AirBnB, Booking, Skyscanner, Netflix, Steam and Epic Games — saying this group was selected to cover a range of online services and sectors.
The AI analysis of the policies was carried out in June, after the update to the EU’s data protection rules had come into force. The regulation tightens requirements on obtaining consent for processing citizens’ personal data by, for example, increasing transparency requirements — basically requiring that privacy policies be written in clear and intelligible language, explaining exactly how the data will be used, in order that people can make a genuine, informed choice to consent (or not consent).
In theory, all 15 parsed privacy policies should have been compliant with GDPR by June, as it came into force on May 25. However some tech giants are already facing legal challenges to their interpretation of ‘consent’. And it’s fair to say the law has not vanquished the tech industry’s fuzzy language and logic overnight. Where user privacy is concerned, old, ugly habits die hard, clearly.
But that’s where BEUC is hoping AI technology can help.
It says that out of a combined 3,659 sentences (80,398 words), Claudette marked 401 sentences (11.0%) as containing unclear language and 1,240 (33.9%) as containing “potentially problematic” clauses or clauses providing “insufficient” information.
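Claudette itself is a supervised machine learning system trained on manually annotated policies, so the following is only a drastically simplified, hypothetical stand-in for the kind of vague-language flagging it automates; the marker list and example sentences are invented for illustration:

```python
# Illustrative sketch only: real clause classifiers like Claudette are
# trained on annotated corpora, not hand-written keyword rules.
UNCLEAR_MARKERS = [
    "may", "might", "from time to time", "as appropriate",
    "including but not limited to", "and/or", "necessary",
]

def flag_unclear(sentence: str) -> bool:
    """Flag sentences whose hedged wording leaves data processing unclear."""
    s = sentence.lower()
    return any(marker in s for marker in UNCLEAR_MARKERS)

# A hypothetical two-sentence "policy": one hedged, one concrete.
policy = [
    "We may share your data with partners from time to time.",
    "We delete your account data 30 days after you close your account.",
]
flags = [flag_unclear(s) for s in policy]
print(flags)  # -> [True, False]
```

The point of the sketch is the workflow, not the rules: a per-sentence classifier over a few thousand sentences is what lets the researchers report percentages like the 11.0% and 33.9% figures above.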
BEUC says identified problems include:
The bolstering of the EU’s privacy rules, with GDPR tightening the consent screw and supersizing penalties for violations, was exactly intended to prevent this kind of stuff. So it’s pretty depressing — though hardly surprising — to see the same, ugly T&C tricks continuing to be used to try to sneak consent by keeping users in the dark.
At the time of writing Facebook had not responded to our request for comment. Update: After publication, a company spokesperson sent this statement: “We have worked hard to ensure we meet the requirements of the GDPR, making our policies clearer, our privacy settings easier to find and introducing better tools for people to access, download, and delete their information. We sought input from privacy experts and regulators across Europe as part of these preparations, including our lead regulator the Irish DPC.
“Our work to improve people’s privacy didn’t stop on May 25. For example, we’re building Clear History; a way for everyone to see the websites and apps that send us information when you use them, remove this information from your account, and turn off our ability to store it.”
Commenting in a statement, Monique Goyens, BEUC’s director general, said: “A little over a month after the GDPR became applicable, many privacy policies may not meet the standard of the law. This is very concerning. It is key that enforcement authorities take a close look at this.”
The group says it will be sharing the research with EU data protection authorities, including the European Data Protection Board, and is not itself ruling out bringing legal actions against law benders.
But it’s also hopeful that automation will — over the longer term — help civil society keep big tech in legal check.
Although, where this project is concerned, it also notes that the training data-set was small — conceding that Claudette’s results were not 100% accurate — and says more privacy policies would need to be manually analyzed before policy analysis can be fully conducted by machines alone.
So file this one under ‘promising research’.
“This innovative research demonstrates that just as Artificial Intelligence and automated decision-making will be the future for companies from all kinds of sectors, AI can also be used to keep companies in check and ensure people’s rights are respected,” adds Goyens. “We are confident AI will be an asset for consumer groups to monitor the market and ensure infringements do not go unnoticed.
“We expect companies to respect consumers’ privacy and the new data protection rights. In the future, Artificial Intelligence will help identify infringements quickly and on a massive scale, making it easier to start legal actions as a result.”