All Hot News Popular Media
Facebook «TechCrunch»

Tue, 18 Sep 2018 14:43:41 +0000
Early-bird sale ends this Friday for TC Sessions: AR/VR in Los Angeles
After Friday, September 21, ticket prices for TC Sessions: AR/VR will jump $100 for the October 18 event hosted at UCLA. Buy your early-bird $99 tickets today before these savings fly the coop!

Facebook’s ad platform is facing charges that it has enabled gender-based discrimination against millions of women in a class action suit filed on behalf of three female workers and backed by the American Civil Liberties Union (ACLU).

The legal action also names ten employers who are alleged to have used the social media giant’s platform to exclusively and unlawfully target job adverts at male Facebook users, thereby excluding women and non-binary users from receiving the ads.

The ACLU, law firm Outten & Golden LLP, and the Communications Workers of America have filed charges with the Equal Employment Opportunity Commission.

The 10 employers and employment agency advertisers named in the suit, which the charges allege ran discriminatory job ads in “mostly” male-dominated fields, include a police department, multiple retailers, a software development firm and various installation, repair and remodeling companies. (All ten named in the suit are listed in the ACLU’s press release.)

“I’ve heard stories about when people looked for jobs in the classified ads and big bold letters read ‘help wanted-male’ or ‘help wanted-female.’ I was shocked to find that this discrimination is still happening, just online instead of in newspapers,” said Bobbi Spees, a job-seeker and lead complainant in the case, commenting in a statement.  “I shouldn’t be shut out of the chance to hear about a job opportunity just because I am a woman.”

“The internet did not erase our civil rights laws.  It violates the law if an employer uses Facebook to deny job ads to women,” added Peter Romer-Friedman, an attorney at Outten & Golden, in another supporting statement. “The last time I checked, you don’t have to be a man to be a truck driver or a police officer.  But Facebook and employers are acting like it’s the 1950s, before federal employment law banned sex discrimination.”

The charges allege that Facebook, via its platform, delivers job ads selectively based on age and sex categories that employers expressly choose, and that it earns revenue from placing job ads that exclude women and older workers from receiving the ads.

The ACLU notes that targeting job ads by sex is unlawful under federal, state, and local civil rights laws, including Title VII of the Civil Rights Act of 1964.

“Sex segregated job advertising has historically been used to shut women out of well-paying jobs and economic opportunities,” said Galen Sherwin, senior staff attorney at the ACLU Women’s Rights Project, in another supporting statement. “We can’t let gender-based ad targeting online give new life to a form of discrimination that should have been eradicated long ago.”

While online platforms are not as heavily regulated as publishing platforms, the lawsuit argues that Facebook can be held legally responsible for:

  1. creating and operating the system that allows and encourages employers to select the gender and age of the people who get their job ads, including providing employers with data on users’ gender and age for targeting purposes;
  2. delivering the gender- and age-based ads based on employers’ preferences; and
  3. acting as a recruiter connecting employers with prospective employees.

We’ve reached out to Facebook for comment on the lawsuit.

It’s by no means the first time the company has faced civil rights complaints related to its ad platform.

Back in 2016 ProPublica exposed how Facebook’s ad tools could be used to exclude users based on their “ethnic affinity” — including in protected categories such as housing, employment and credit, where discriminatory advertising is prohibited.

The company responded by saying it would build tools to prevent advertisers from applying ethnic affinity targeting in the protected categories, and by rewording its ad policies to more clearly prohibit discrimination.

But the following year another ProPublica investigation showed it was still failing to block discriminatory ads — leaving Facebook to apologize for failing to effectively enforce its own policies (hmmm, now where else have we heard the company accused of that… ), and saying: “Our systems continue to improve but we can do better.”

Last year the company was also shown to have allowed ads that included hateful sentiments targeted at Jewish people.

Around the same time that Facebook was facing renewed criticism over ethnic affinity targeting on its platform being used as a tool for racial discrimination, the company said it would also take a look at how advertisers are using exclusion targeting across other “sensitive segments” — such as those relating to members of the LGBTQ community and people with disabilities.

It’s not clear whether Facebook included gender-based discrimination in those 2017 self-reviews too. (We’ve asked and will update this post with any response.)

Either way, it appears Facebook has failed to pick up on the potential for gender-based discrimination to be carried out via its ad platform.

And given all the attention its ad tools have attracted lately as a vector for discrimination and other types of abuse, that looks careless to say the least.

Facebook’s ad platform has also faced criticism in Europe over the sensitive inferences it makes about users — the platform allows advertisers to target people based on political and religious interests, meaning Facebook is quietly inferring sensitive traits about individuals.

Privacy experts argue this modus operandi entails Facebook processing the sensitive personal data of individuals without explicitly asking people for their upfront consent (as would be required under EU law when you’re processing sensitive personal data such as political or religious affiliation).

An opinion on a person is still personal data of that person, they contend.

Facebook disagrees, disputing that the inferences its ad platform makes about users (based on its tracking and data-mining of people) constitute personal data. But it’s yet another bone of legal contention now being lobbed at the company.


Mon, 17 Sep 2018 23:00:42 +0000
Facebook named in suit alleging job ads on its platform unlawfully discriminated against women

A wide-ranging new poll yields some useful insight into how worried the average American feels about election threats as the country barrels toward midterms.

The survey, conducted by NPR and researchers with Marist College, polled 949 adult U.S. residents in early September across regions of the country, contacting participants through both landlines and mobile devices. The results are a significant glimpse into current attitudes around the likelihood of foreign election interference, election security measures and how well social media companies have rebounded in the public eye.

Attitudes toward Facebook and Twitter

As the most recent dust settles around revelations that Russia ran influence campaigns targeting Americans on social media platforms, just how much do U.S. voters trust that Facebook and Twitter have cleaned up their acts? Well, they’re not convinced yet.

In response to a question asking about how much those companies had done since 2016 “to make sure there is no interference from a foreign country” in the U.S. midterm elections, 24 percent of respondents believed that Facebook had done either “a great deal” or “a good amount,” while 62 percent believed the company had done “not very much” or “nothing at all.”

When asked the same question about Twitter, only 19 percent thought that the company had made significant efforts, while 57 percent didn’t think the company had done much. Unlike nearly every other question in the broad-ranging survey, answers to this set of questions didn’t show a divide between Republicans and Democrats, making it clear that in 2018, disdain for social media companies is a rare bipartisan position.

When it comes to believing what they read on Facebook, only 12 percent of voters had “a great deal” or “quite a lot” of confidence that content on the platform is true, while 79 percent expressed “not very much confidence” or none at all. Still, those numbers have perked up slightly from earlier polling that saw only 4 percent of those polled stating that they were confident in the veracity of content they encountered on Facebook.

Midterm perspectives

In response to the question “Do you think the U.S. is very prepared, prepared, not very prepared or not prepared at all to keep this fall’s midterm elections safe and secure?,” 53 percent of respondents felt that the U.S. is prepared while 39 percent believed that it is “not very prepared” or not prepared at all. Predictably, this question broke down along party lines, with 36 percent of Democrats and 74 percent of Republicans falling into the “prepared” camp (51 percent of independents felt the U.S. is prepared).

An impressive 69 percent of voters believed that it was either very likely or likely that Russia would continue to “use social media to spread false information about candidates running for office” during the midterm elections, suggesting that voters are moving into election season with a very skeptical eye turned toward the platforms they once trusted.

When it came to hacking proper, 41 percent of respondents believed that it was very likely or likely that “a foreign country will hack into voter lists to cause confusion” over who can vote during midterm elections, while 55 percent of respondents said that hacked voter lists would be not very likely or not at all likely. A smaller but still quite significant 30 percent of those polled believed that it was likely or very likely that a foreign country would “tamper with the votes cast to change the results” of midterm elections.

Election security pop-quiz

Political divides were surprisingly absent from some other questions around specific election security practices. Democrats, Republicans and independent voters all indicated that they had greater confidence in state and local officials to “protect the actual results” of the elections and trusted federal officials less, even as the Department of Homeland Security takes a more active role in providing resources to protect state and local elections.

A few of the questions had a right answer, and happily most respondents did get a big one right. Overall, 55 percent of voters polled said that electronic voting systems made U.S. elections less safe from “interference or fraud” — a position largely backed by election security experts who advocate for low-tech options and paper trails over vulnerable digital systems. Only 31 percent of Democrats wrongly believed that electronic systems were safer, though 49 percent of Republicans trusted electronic systems more.

When the question was framed a different (and clearer) way, the results were overwhelmingly in favor of paper ballots — a solution that experts widely agree would significantly secure elections. Indeed, 68 percent of voters thought that paper ballots would make elections “more safe” — an attitude that both Republican and Democratic Americans could get behind. Unfortunately, legislation urging states nationwide to adopt paper ballots has continued to face political obstacles in contrast to the wide support observed in the present poll.

On one last election security competence question, respondents again weighed in with the right answer. A whopping 89 percent of those polled correctly believed that online voting would be a death knell for U.S. election security — only 8 percent said, incorrectly, that connecting elections to the internet would make them more safe.

For a much more granular look at these attitudes and many others, you can peruse the poll’s full results here. For one, there’s more interesting stuff in there. For another, confidence — or the lack thereof — in U.S. voting systems could have a massive impact on voter turnout in one of the most consequential non-presidential elections the nation has ever faced.


Mon, 17 Sep 2018 22:01:58 +0000
This is what Americans think about the state of election security right now

Facebook has rolled out a “pilot” program of new security tools for political campaigns — just weeks before millions of Americans go to the polls for the midterm elections.

The social networking giant said it’s targeting campaigns that “may be particularly vulnerable to targeting by hackers and foreign adversaries.”

Once enrolled, Facebook will help campaigns adopt stronger security protections “like two-factor authentication” and monitor for potential hacking threats, Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said in a Monday blog post.

Facebook’s chief Mark Zuckerberg has admitted that the company “didn’t do enough” in the 2016 presidential election to prevent meddling and the spread of misinformation, and the company has taken a lashing from lawmakers for failing to step up ahead of the midterms.

A former Obama campaign official told TechCrunch that the offering was important — but late.

“Fifty days is an eternity in campaign time,” said Harper Reed, who served as President Obama’s chief technology officer during the 2012 re-election campaign. “At this point, if [a campaign] has made gross security problems, they’ve already made them.”

But he questioned if now equipping campaigns with security tools will “actually solve the problem, or if it just solves Facebook’s PR problem.”

Facebook — like other tech giants — has been under the microscope in recent years after the social networking giant failed to prevent foreign meddling in the 2016 presidential election, in which adversaries — typically Russia — used the platform to spread disinformation.

The company’s done more to crack down on foreign interference campaigns after facing rebuke from lawmakers.

But ahead of the midterms, even the company’s former chief security officer was critical of Facebook. In an interview at Disrupt SF, Alex Stamos said that critical steps to protect the midterms hadn’t been taken in time.

“If there’s no foreign interference during the midterms, it’s not because we did a great job. It’s because our adversaries decided to [show] a little forbearance, which is unfortunate,” said Stamos.

Facebook, for its part, said its latest rollout of security tools “might be expanded to future elections and other users” beyond the midterms.

“Hacking is a part of elections,” said Reed. But with just two months to go before voters go to the polls, campaigns “have to just keep doing what they’re doing,” he said.


Mon, 17 Sep 2018 17:55:24 +0000
Facebook pilots new political campaign security tools — just 50 days before Election Day

In 2018, VR adoption has plenty of demons to chase as it looks to build a larger, more mainstream audience. In 2017, the chief concern for most in the industry was the lack of content available for headsets.

The “content problem,” as it was called, was a huge concern for headset companies like Oculus, which were selling pricey headsets with which users could blaze through the available content in a few weeks. It was a daunting challenge for the young industry, but one that no one seems to be talking about quite as much just a year later.

With VCs still reluctant to invest in content and a relatively small user base, how did the industry move past the content problem? Well, a large part was Oculus’s effort to spend its way through the problem, investing hundreds of millions in indie developers building new and innovative VR content.

At TechCrunch Sessions: AR/VR (early-bird sale ends Friday) we’ll ask Oculus’s Executive Producer of Experiences Yelena Rachitsky about the company’s latest strategies for investing in VR content and where she thinks the biggest opportunities are for VR creators.

Two years ago, Facebook CEO Mark Zuckerberg shared at the Oculus developer conference that the company had already pumped $250 million into VR content investments, pledging another $250 million to be invested thereafter. The company hasn’t delivered many other details on total funding since, but fast-forward to the present and it’s difficult to find a VR developer that hasn’t benefited from Oculus’s big investments in the space.

Oculus is still a major driver of content across gaming, but its funding efforts are even more important for studios building immersive cinematic content. While game studios have a tried-and-true method for monetizing users, revenue options are much less clear for small studios pumping tens or hundreds of thousands of dollars into 10-minute VR experiences.

How these studios will eventually monetize is an open question; for now they are still grappling with very basic creative questions, like how a modern audience will engage with a technology that should theoretically enable much deeper emotional experiences. Oculus has already poured millions of dollars into these non-gaming projects as it has shifted away from trying to answer these questions in-house.

VR content creators have learned quite a lot in the last several years about the craft of building immersive content for headsets — a lot about what works, but even more about what doesn’t. At TC Sessions: AR/VR, we’ll hear from Rachitsky about her first-hand experience at Oculus, helping to build up a network of studios that is pushing the medium’s potential forward year after year.

The early-bird sale ends Friday, September 21. Book your tickets today and you’ll save $100. Student tickets are just $45.


Mon, 17 Sep 2018 16:01:02 +0000
Hear how Oculus is minimizing VR’s content problem at TC Sessions: AR/VR

Facebook announced this morning it’s expanding its bug bounty program – which pays researchers who find security vulnerabilities within its platform – to now include issues found in third-party apps and websites. Specifically, Facebook says it will reward valid reports of vulnerabilities that relate to the improper exposure of Facebook user access tokens.

Typically, when a user logs into another app using their Facebook account information, they’re able to decide what information the token and, therefore, the app can access and what actions it can take.

But if the token becomes compromised, users’ personal information could be misused.
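
To make the stakes concrete, here’s a minimal sketch of that scoped-token model, assuming the standard Facebook Login OAuth flow and the Graph API’s debug_token inspection endpoint; the app ID, app token, redirect URI and API version below are placeholders, not anything from Facebook’s announcement:

```python
# Sketch of Facebook Login's scoped-token model. APP_ID / APP_TOKEN and
# the redirect URI are placeholders; debug_token is the Graph API's
# token-inspection endpoint.
import requests

APP_ID = "YOUR_APP_ID"                     # placeholder
APP_TOKEN = "YOUR_APP_ID|YOUR_APP_SECRET"  # placeholder app access token

# 1. The app sends the user to the OAuth dialog, requesting only the
#    permissions (scopes) it needs; the user decides what to grant.
login_url = (
    "https://www.facebook.com/v3.1/dialog/oauth"
    f"?client_id={APP_ID}"
    "&redirect_uri=https://example.com/callback"
    "&scope=email,public_profile"
)

# 2. Given a user token from that flow, the app can ask Facebook what the
#    token is actually allowed to do -- useful when auditing whether a
#    leaked token would expose more than it should.
def inspect_token(user_token: str) -> dict:
    resp = requests.get(
        "https://graph.facebook.com/debug_token",
        params={"input_token": user_token, "access_token": APP_TOKEN},
        timeout=10,
    )
    return resp.json()  # includes granted scopes, app ID and expiry
```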

Facebook says it will pay a minimum reward of $500 per vulnerable app or website, if the report is valid. The company also noted it wasn’t aware of any other programs offering rewards of this scope for all eligible third-party apps.

If a vulnerability is determined to be legitimate, Facebook will work with the affected app developer or website operator to fix their code. Any apps that don’t comply with Facebook’s request to address the issue will be suspended from the platform until the problem is resolved and the app passes a security review.

In addition, Facebook says it will revoke all access tokens that could have been compromised in order to prevent potential misuse, and it will notify anyone it believes has actually been impacted.

The company spells out what sort of information researchers (the white hat hackers) should include in their reports in order to receive the reward. It also says it’s only accepting reports where the bug is discovered by passively viewing the data sent to or from a device by the affected app or website — not through any form of active manipulation on the researchers’ part.
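
In practice, that passive approach might look something like the following sketch — scanning request URLs captured from one’s own device (say, via a local proxy) for tokens flowing to third-party hosts. The parameter names, log format and example URLs are illustrative assumptions, not Facebook’s criteria:

```python
# Hedged sketch of a passive token-leak check: flag Facebook-style access
# tokens appearing in query strings sent to non-Facebook hosts.
from urllib.parse import urlparse, parse_qs

TOKEN_PARAMS = {"access_token", "fb_access_token"}  # assumed param names

def find_token_leaks(request_urls):
    """Yield (host, param) pairs where a token appears in a query string."""
    for url in request_urls:
        parsed = urlparse(url)
        params = parse_qs(parsed.query)
        for param in TOKEN_PARAMS & params.keys():
            # Tokens sent to non-Facebook hosts in plain query strings are
            # exposed to server logs, referrer headers and intermediaries.
            if parsed.hostname and not parsed.hostname.endswith("facebook.com"):
                yield parsed.hostname, param

captured = [
    "https://api.example-app.com/sync?access_token=EAAB...",  # would flag
    "https://graph.facebook.com/me?access_token=EAAB...",     # would not
]
for host, param in find_token_leaks(captured):
    print(f"possible token exposure: {param} sent to {host}")
```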

The news comes at a time when Facebook is still dealing with the fallout from the Cambridge Analytica scandal, which compromised the personal data of as many as 87 million Facebook users. This was followed by news this summer that a quiz app had been leaking data on 120 million users for years.

Since then, the company has been tightening its API platform, reviewing all apps, suspending hundreds of apps deemed suspicious, rolling out tools to help people better manage their apps, and more.

As a part of those changes, Facebook said earlier this year that its bug bounty program would be expanded.

Separately from this new program, the company now also runs a Data Abuse Bounty program, which rewards first-hand knowledge of third parties that collect user data in order to pass it on to malicious parties.

“We would like to emphasize that our bug bounty program does not replace the obligations on app developers to maintain appropriate technical and organizational measures to protect personal data — either regulatory obligations (for example, if the app developer is a data controller for the purposes of GDPR) or the rigorous controls we require through our terms of service and policies that apply to all developers on the Facebook platform,” wrote Dan Gurfinkel, Facebook Security Engineering Manager, in an announcement.

More details on the program are here.


Mon, 17 Sep 2018 15:05:47 +0000
Facebook expands bug bounty program to include third-party apps and websites

On May 11, Netflix released the teen dramedy “The Kissing Booth” just as the school year was wrapping up for teens across the country.

By June, the company had a smash hit among the tweenage set, and Wattpad, the company that owned the rights to The Kissing Booth, had its first true breakout vehicle. The story, written on Wattpad’s publishing platform by Beth Reekles, was a proof point for the company’s thesis pitching a new twist on the old model of discovering stories and creative talent for the entertainment industry.

Behind the success of the film is a nascent movement among startup companies that are trying to open the doors of Hollywood’s dream factory to a broader group of creative professionals by riding the wave of fan fiction and user-generated content all the way to the Paramount lot (or the Disney lot, or Sony Studios).  

“In this obvious period of disruption in the entertainment industry, how we’re finding stories is evolving,” said Wattpad Studios chief Aron Levitz.

YouTube, the short-lived Vine app and Instagram have all created new platforms for discovering potential on-camera talent, and Amazon, Apple, Facebook, Instagram (again), Netflix and YouTube (again) have smashed the distribution system for television and movies. But these platforms and the traditional studios they’d like to supplant have a voracious appetite for stories to tell and (many) are reluctant to risk millions of dollars behind something unproven.

Hollywood has always borrowed (or stolen) from other media to entertain the masses, but it seems like the fields it’s foraging in for new stories have narrowed to a few serialized playgrounds (comic books, old television shows and movies and wildly successful young-adult genre fiction).

While there are thousands of flowers to be found there, new tech-enabled companies are suggesting there might be other patches where new talent can be discovered, harvested and leveraged for corporate gain and viewer delight.

Startups like Wattpad and Tongal (for directors and cinematographers), and new financing platforms like Legion M (for producing features), are aiming to elevate new talent and provide what the companies hope will be built-in audiences for successful new programming on platforms like Netflix, Apple and others — and the hundreds of networks that are vying for attention in an increasingly fragmented media landscape.

It wasn’t always this way. When Tongal was created, roughly a decade ago, the entertainment industry looked much, much different than it does now.

Ten years ago Netflix announced it would let its DVD subscribers watch streaming video, as well — mostly old movies and syndicated shows that had already made their millions for the big networks and studios. That was the starting gun of what would become a race to roll up talent and gain audience in a creative landscape that was becoming increasingly competitive — with new entrants joining at every new lap.

At the time, Tongal was a discovery mechanism for new talent and a way for brands to pay for user-generated content they liked. The company raised $15 million from Insight Venture Partners to harness the growing popularity of social media reach to create potentially viral videos for brands.

Tongal is still working under the thesis of user-generated content, but the difference now is the millions of dollars these videos and their creators can bring in — and the ability to energize and inspire a fan base to connect more directly and engage more frequently with new titles. All the while, Tongal gives studios a window into a wider world of talent.

One creator on the platform, Tucker Barrie, has gone from making short videos for social media for IAMS to a career as an animator on projects like Isle of Dogs. “Tongal is a good spot for people who don’t have a lot of experience to gain a lot of experience and make a name for themselves,” Barrie said.

In the past year the company has inked a deal with National Geographic to produce a series called WILD After Dark. The first late-night series from National Geographic WILD, the new episodes will feature shorts from members of the Tongal platform on animal-related subjects. It launched with an open call for submissions in February.

More recently Tongal has linked up with Wattpad to call on its network of creators to pitch a treatment for Wattpad’s wildly successful science fiction thriller Expiration Date. In July, Tongal issued its call to filmmakers for submissions from which the partners will pick three finalists. Those finalists will receive funding to produce a “proof-of-concept” series trailer.

Then, Wattpad, Tongal and their distribution partner SYFY will award a grand prize winner additional funding to create a digital pilot episode with the potential to go on to develop the entire series for SYFY.com as part of its fan creators program.

“The partnership between Tongal and Wattpad flips the script on Hollywood by changing the how and who of content creation through our open platforms for talent,” said James DeJulio, Tongal’s co-founder and president, in a statement at the time. “These new global communities are made up of diverse and passionate creators, and now they’re actually developing the shows they want to watch. I’m thrilled that SYFY.com has opened the door for this innovative, by the fans, for the fans shift.”

This marks the second collaboration between Tongal and Wattpad on project development for a network. The two companies, which have a natural affinity as creative platforms focused on the visual and storytelling elements of a production (respectively), had worked on a similar competition for the CW Seed, and its production of Cupid’s Match, another popular Wattpad story (spoiler: it’s not very good).

“It’s one of those great proof points for Wattpad and Wattpad studios,” said Levitz, the head of Wattpad Studios, in a February interview. “I think it’s the first public one that we’re talking about in a strong way.”

On Wattpad, Cupid’s Match had 32 million reads, and it was that kind of viral popularity that piqued the interest of the CW Network. “We can use the strength of an audience and get someone like CW interested in the output,” Levitz said. “We have 400 million stories on the platform. We’re able to look at the data we have, the audience we have and the story we have and use data to choose the right stories for the right partner.”

Partners are lining up. Sony Pictures Television bought the rights to the Wattpad story “Death is my BFF,” and Hulu signed off on an order for “Light as a Feather.” Studios and networks including Turner, Universal Cable Productions (a division of NBCUniversal), eOne and Paramount Pictures have also signed on to work with the startup.

Like Tongal, Wattpad also took a circuitous path to becoming a player in Hollywoodland. The company initially started as an e-book community operator sharing fan fiction and classic works. Over time, the fan fiction side of the content marketplace won out and the Toronto-based company went from raising capital from a consortium of angel investors to raising $51 million from a consortium of investors, including the Chinese internet giant, Tencent, earlier this year. It’s likely that Tencent (and the studios it’s partnering with) were drawn to Wattpad’s 60 million monthly users.

The foundation for the belief that fan fiction could be leveraged into hundreds of millions for the movie industry was laid by the success of the Fifty Shades franchise. The best-selling books, derived from Twilight fan fiction, were optioned into a series of three films and made for a cool $150 million.

By the time the last movie in the series debuted, the films were on their way to making more than $1 billion at the box office.

For the past decade Hollywood has been relying on big franchises and fan-driven stories to create big numbers at the box office or online, said DeJulio.

“Fans are the lifeblood of these franchises,” DeJulio said. “We’re in this weird time right now… where marketing is very expensive and it is in a lot of ways hamstringing entertainment.”

DeJulio sees Tongal as a platform where each side can influence and support the other.

“The studios, once they do get a hit… They realize that through fan communities and engaging them they can not only market it but they can actually get the work done too [of creating new content],” DeJulio said.


If Wattpad and Tongal are using their network of users to find and promote talent, Legion M is hoping to use the network of fans for genre content to finance new productions.

The startup production studio has raised $3 million in equity crowdfunding over two rounds and has managed to grab a stake in well-reviewed indie projects like Colossal (starring Anne Hathaway and Jason Sudeikis) and Mandy, a new Nicolas Cage vehicle already being touted as cult-classic gold. What that means as far as returns go for the shareholders that back the company’s funding campaigns is unclear, especially since the company’s Bad Samaritan project (starring David Tennant, everyone’s favorite of the new Doctor Who leads) was critically panned.

Founded by two serial internet entrepreneurs, Paul Scanlan and Jeff Annison, and backed by partnerships with folks like the Austin-based theater chain Alamo Drafthouse, Legion M’s goal is to bring in 1 million fans as investors to back projects.

The idea is to harness fan support for sales and marketing help and to surface projects that have enough of a built-in audience to generate profits for the company.

“We believe an entertainment company owned by fans is better than one owned by Wall Street,” said Paul Scanlan, Legion M’s co-founder and CEO, in a statement announcing the company’s new crowdfunding campaign.

Some of the projects Legion M has affiliated itself with are based more around fan engagement than actual dollar investment. In fact, the company isn’t a producer of the marquee Colossal film; instead it came on to provide marketing support through its network of fans, according to an interview with the director.

Scanlan and Annison launched MobiTV, which was an early developer of technology to stream digital media on mobile devices. The two went on to launch New York Rock Exchange, a company that allows fans to buy illiquid shares in songs they love. It’s like a coin offering, without the upside, and without any legal ramifications, because there’s actually nothing of value that acquirers are buying.

Unlike the Rock Exchange, average investors are buying real shares in the crowdfunding offerings the two co-founders are selling via the Securities and Exchange Commission’s new crowdfunding regulations. And they’re tapping into the thesis that fans and consumers are driving the creation of commercially viable content now more than ever.

Wattpad, Tongal and Legion M aren’t alone in their efforts. Companies like Seed&Spark, Coverfly and The Black List are also doing their best to uncover new artists and creators for the entertainment industry to develop. While on the financing side, new cryptocurrencies like MovieCoin (which just launched a pre-sale of its tokenized financing offering for producing new movies) and TaTaTu are angling to give the moviegoing public another (ideally more transparent) way to finance movies.

“Hollywood is a notoriously difficult place to traverse in the entertainment business. What we find in content creation, and the investment process as well, is that every project is seeking an audience,” Annison said in an interview with The Niner Times (the local university paper for the University of North Carolina, Charlotte). “Among Hollywood, which is such a massive world to step into, there are limitations along with those huge companies. In essence, it’s a ‘hit-driven’ enterprise, where the lines are drawn between the artistic side of filmmaking and the business side of entertainment. That can be a complicated street to walk down.”


Mon, 17 Sep 2018 07:52:18 +0000
Startups are giving writers and filmmakers more ways to make it in Hollywood

It’s that time again, folks, time to say goodbye to a social media service from days past.

Following the shuttering of Klout earlier this year, now Path, the one-time rival to Facebook, is closing its doors, according to an announcement made today. (Yes, you may be surprised to learn that Path was still alive.)

The eight-year-old service will close down in one month — October 18 — but it will be removed from the App Store and Google Play on October 1. Any remaining users have until October 18 to download a copy of their data, which can be done here.

Path was founded by former Facebook product manager Dave Morin and ex-Napster duo Dustin Mierau and Shawn Fanning. The company burst onto the scene in 2010 with a mobile social networking app that was visually pleasing and — importantly — limited to just 50 friends per user. That positioned it as a more private alternative to Facebook with some additional design bells and whistles, although the friend limit was later raised to 150 and then 500.

At its peak, the service had around 50 million users and it was once raising money at a valuation of $500 million. Indeed, Google tried to buy it for $100 million when it was just months old. The startup has raised roughly $70 million from investors that included top Silicon Valley names like Index, Kleiner Perkins and Redpoint.


But looks fade, and social media is a tough place when you’re not Facebook, which today has over 1.5 billion active users and aggressively ‘borrowed’ elements from Path’s design back in the day.

Path’s road took a turn for the worse and the much-hyped startup lost staff, users and momentum (and user data). The company tried to launch a separate app to connect businesses and users — Path Talk — but that didn’t work, and ultimately the company was sold to Korea’s Kakao — a messaging and internet giant — in an undisclosed deal in 2015. Kakao bought the app because it was popular in Indonesia, the world’s fourth-most-populous country, where Path had four million users; the Korean firm was making a major play for that market, which is Southeast Asia’s largest economy and a growing market for internet users.

However, Path hasn’t kicked on in the last three years and now Kakao is discarding it altogether.

“It is with deep regret that we announce that we will stop providing our beloved service, Path. We started Path in 2010 as a small team of passionate and experienced designers and engineers. Over the years we have tried to lay out our mission: through technology and design we aim to be a source of happiness, meaning, and connection to our users,” the company said in a statement.

Thanks Aulia

Correction: Post updated to correct number of peak users, amount raised and friend restriction. 

Sun, 16 Sep 2018 10:24:37 +0000
Mobile social network Path, once a challenger to Facebook, is closing down

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform — especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections means more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years as the human impact of handing millions of people lacking in digital literacy some very powerful social sharing tools — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar Facebook’s tools have been used to spread hate and accelerate ethnic cleansing and/or the targeting of political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year, which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

While in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The cost of the weaponization of digital information in markets such as Myanmar looks incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping  people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experiment’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with that stated mission for the human rights policy director.

At the same time, Facebook is currently hiring for a public policy manager in Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

As well as “new opportunities for democratic engagement”, among the role’s other listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”.

So here, in a second policy job, Facebook looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

And if Facebook wants an accelerated understanding of human rights issues around the world it might be better advised to take a more joined up approach to human rights across its own policy staff board, and at least include it among the listed responsibilities of all the policy shapers it’s looking to hire.


Fri, 14 Sep 2018 21:48:20 +0000
Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

Bon anniversaire, Let’s Encrypt!

The free-to-use nonprofit was founded in 2014 in part by the Electronic Frontier Foundation and is backed by Akamai, Google, Facebook, Mozilla and more. Three years ago Friday, it issued its first certificate.

Since then, the numbers have exploded. To date, more than 380 million certificates have been issued on 129 million unique domains. That also makes it the largest certificate issuer in the world, by far.

Now, 75 percent of all Firefox traffic is HTTPS, according to public Firefox data — in part thanks to Let’s Encrypt. That’s a massive increase from when it was founded, when only 38 percent of website page loads were served over an HTTPS encrypted connection.

“Change at that speed and scale is incredible,” a spokesperson told TechCrunch. “Let’s Encrypt isn’t solely responsible for this change, but we certainly catalyzed it.”

HTTPS is what keeps the pipes of the web secure. Every time your browser lights up in green or flashes a padlock, a TLS certificate is authenticating the website and the connection between your computer and the site is being encrypted, ensuring nobody can intercept and steal your data or modify the page.
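
For the curious, here’s a small Python sketch (standard library only) of what sits behind that padlock: it opens a TLS connection and reads the certificate the server presents. The hostname is just an example:

```python
# Open a verified TLS connection and inspect the server's certificate.
import socket
import ssl

def fetch_certificate(host: str, port: int = 443) -> dict:
    context = ssl.create_default_context()  # verifies against trusted CAs
    with socket.create_connection((host, port), timeout=10) as sock:
        # The handshake fails here if the certificate is invalid, expired
        # or issued for a different hostname.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = fetch_certificate("letsencrypt.org")
print(cert["issuer"])    # the issuing certificate authority
print(cert["notAfter"])  # expiry date; Let's Encrypt certs last 90 days
```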

But for years, the certificate market was broken, expensive and difficult to navigate. In an effort to “encrypt the web,” the EFF and others banded together to bring free TLS certificates to the masses.

That means bloggers, single-page websites and startups alike can get an easy-to-install certificate for free — even news sites like TechCrunch rely on Let’s Encrypt for a secure connection. Security experts and encryption advocates Scott Helme and Troy Hunt last month found that more than half of the top million websites by traffic are on HTTPS.

And as it’s grown, the certificate issuer has become trusted by the major players — including Apple, Google, Microsoft, Oracle and more.

A fully encrypted web is still a ways off. But with close to a million Let’s Encrypt certificates issued each day, it looks more within reach than ever.


Fri, 14 Sep 2018 15:00:10 +0000
Three years later, Let’s Encrypt has issued over 380 million HTTPS certificates

TechCrunch is heading to UCLA on October 18 and we’ve assembled some of the AR/VR industry’s most prescient founders, investors and executives to chat about the startups and trends driving virtual and augmented reality in 2018.

The world’s top tech companies have heavily invested in AR/VR and are persistent in broadcasting the technologies’ potential to blur the lines of how consumers interact with the digital world. Beyond the tech titans, it’s the small startups that are dialing into what’s missing in the ecosystem right now. Our agenda showcases some of the powerhouses in the space, but also plenty of smaller teams that are building fundamental technologies for virtual worlds.

We still have a few tricks up our sleeves and will be adding some new names to the agenda over the next month so keep your eyes open. In the meantime, check out these agenda highlights:


TechCrunch Sessions: AR/VR
UCLA, Los Angeles // October 18
See full agenda here


Early Bird tickets are still on sale for one more week. Buy your early bird tickets today for just $99 and you’ll save $100 before prices go up. Student tickets are just $45. Book your tickets here.


Thu, 13 Sep 2018 22:00:19 +0000
Announcing the agenda for TC Sessions: AR/VR in LA on October 18

A nasty legal battle is set to play out between two former Uber executives.

Eric Alexander, the ride-hailing company’s former president of business in Asia-Pacific, has filed suit against former Uber PR chief Rachel Whetstone.

Alexander blames Whetstone for his firing from Uber in June 2017, claiming her “grossly misleading statements” both internally at Uber and to the media, “destroyed his reputation.” He claims she “harbored deep seated personal animosity” against him, was jealous of his close relationship with then-CEO Travis Kalanick and frequently made racist comments about several minority groups during her tenure.

Update: A spokesperson for Whetstone told TechCrunch the “claims about statements made by Ms. Whetstone regarding underrepresented groups are totally and utterly false.”

Whetstone, well-known in Silicon Valley for her comms prowess, also left Uber in 2017 and has since gone on to lead PR efforts at Facebook and now Netflix.

We’ve reached out to Alexander and Uber for comment.

Backstory

Last year, Alexander was very publicly ousted from Uber after obtaining the medical records of a female passenger who was raped by an Uber driver in India. Alexander had reportedly been investigating the case himself because he believed the Indian ride-hailing business Ola was behind the incident and that the competitor was trying to damage Uber’s reputation in India.

Alexander spent just over three years at the company and was a close confidant of Kalanick’s.

The allegations

The allegations outlined in the lawsuit, first reported by Business Insider, don’t seem to be connected, but rather are an attempt by Alexander to portray Whetstone as a vicious, jealous and racist former colleague out for his career:

Ms. Whetstone harbored deep seated personal animosity against Mr. Alexander over his perceived higher status within Uber, as well as Mr. Alexander’s repeated efforts to curtail Ms. Whetstone’s ongoing racist comments (culminating in Mr. Alexander’s public rebuke of Ms. Whetstone in front of another Uber officer). Given the contentious relationship between the parties, upon her severance from Uber, Ms. Whetstone took the unusual step of insisting on a reciprocal non-disparagement clause that specifically referenced Mr. Alexander by name. Ms. Whetstone thereafter proceeded to violate that clause by spreading false and misleading and/or disparaging information about Mr. Alexander’s response to the rape in India. Ms. Whetstone’s derogatory statements were made in direct violation of the non-disparagement clause.

The lawsuit provides several examples of racist comments allegedly made by Whetstone, including that “the Chinese cannot be trusted.”

Alexander says Whetstone also went to reporters — Bloomberg’s Eric Newcomer and Recode’s Kara Swisher were named specifically — and told them “false and misleading information.”

Uber’s past catches up to it

The lawsuit, for the most part, looks to be an attempt on Alexander’s end to clear his name. According to his LinkedIn, he hasn’t pursued any new opportunities since his well-publicized exit from Uber, and that’s likely not for lack of trying.

As for Uber, despite replacing its CEO and several other top-level employees following its no good, very bad year in 2017, the company hasn’t been able to shake its scandal-ridden reputation. The mistakes made under Kalanick’s reign have and will continue to catch up to it. And nothing, not even a rebrand, can stop that.

Here’s a full look at the lawsuit.


Thu, 13 Sep 2018 17:29:59 +0000
Former Uber exec alleges ex-PR chief ‘destroyed his reputation’

Facebook has quietly built and deployed an artificial intelligence programming tool called SapFix that scans code, automatically identifies bugs, tests different patches and suggests the best ones that engineers can choose to implement. Revealed today at Facebook’s @Scale engineering conference, SapFix is already running on Facebook’s massive code base and the company plans to eventually share it with the developer community.

“To our knowledge, this marks the first time that a machine-generated fix — with automated end-to-end testing and repair — has been deployed into a codebase of Facebook’s scale,” writes Facebook’s developer tool team. “It’s an important milestone for AI hybrids and offers further evidence that search-based software engineering can reduce friction in software development.” SapFix can run with or without Sapienz, Facebook’s previous automated bug spotter; when the two are paired, SapFix suggests fixes for the problems Sapienz discovers.
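
Facebook hasn’t published SapFix’s internals, but the generate-and-test loop at the heart of search-based repair can be sketched in a few lines of Python. Everything below — the mutation templates, the toy bug and the test harness — is an illustrative assumption, not Facebook’s implementation:

```python
# Toy illustration of search-based program repair: propose small
# candidate patches, run the tests against each, keep the first to pass.

def candidate_patches(source: str):
    """Naive string-level mutation templates; real tools mutate ASTs."""
    for old, new in [("<", "<="), (">", ">="), ("+ 1", "- 1")]:
        if old in source:
            yield source.replace(old, new, 1)

def repair(buggy_source: str, test_suite):
    """test_suite(namespace) -> bool; True means all tests pass."""
    for patch in candidate_patches(buggy_source):
        namespace = {}
        try:
            exec(patch, namespace)  # load the candidate patch
        except SyntaxError:
            continue                # discard candidates that don't parse
        if test_suite(namespace):
            return patch            # surfaced for human review, not auto-merged
    return None

# A contrived off-by-one: the spec says 18-year-olds count as adults.
buggy = "def is_adult(age): return age > 18"

def tests(ns) -> bool:
    f = ns["is_adult"]
    return f(18) and f(21) and not f(17)

print(repair(buggy, tests))  # -> "def is_adult(age): return age >= 18"
```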

These types of tools could allow smaller teams to build more powerful products, or let big corporations save a ton on wasted engineering time. That’s critical for Facebook as it has so many other problems to worry about.


Glow AI hardware partners

Meanwhile, Facebook is pressing forward with its strategy of reorienting the computing hardware ecosystem around its own machine learning software. Today it announced that top silicon manufacturers — including Cadence, Esperanto, Intel, Marvell and Qualcomm — have signed up to support Glow, its compiler for machine learning hardware acceleration. The plan mirrors Facebook’s Open Compute Project for open sourcing server designs and Telecom Infra Project for connectivity technology.

Glow works with a wide array of machine learning frameworks and hardware accelerators to speed up how they perform deep learning processes. It was open sourced earlier this year at Facebook’s F8 conference.

“Hardware accelerators are specialized to solve the task of machine learning execution. They typically contain a large number of execution units, on-chip memory banks, and application-specific circuits that make the execution of ML workloads very efficient,” Facebook’s team writes. “To execute machine learning programs on specialized hardware, compilers are used to orchestrate the different parts and make them work together . . . Hardware partners that use Glow can reduce the time it takes to bring their product to market.”
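
The post doesn’t go deeper into Glow’s internals, but one core job of a compiler like this is rewriting a framework’s operator graph into fused kernels an accelerator can execute efficiently. The toy fusion pass below is purely illustrative and is not Glow’s actual IR or API.

    # Toy operator-fusion pass of the kind an ML compiler performs when
    # lowering a framework graph for an accelerator. Illustrative only;
    # this is not Glow's actual IR or API.
    from dataclasses import dataclass, field

    @dataclass(eq=False)                 # identity-based equality for nodes
    class Node:
        op: str                          # e.g. "matmul", "relu"
        inputs: list = field(default_factory=list)

    def fuse_matmul_relu(graph):
        """Collapse matmul -> relu chains into a single fused kernel node."""
        fused, skip = [], set()
        for i, node in enumerate(graph):
            if i in skip:
                continue
            nxt = graph[i + 1] if i + 1 < len(graph) else None
            if (node.op == "matmul" and nxt is not None
                    and nxt.op == "relu" and nxt.inputs == [node]):
                fused.append(Node("fused_matmul_relu", node.inputs))
                skip.add(i + 1)
            else:
                fused.append(node)
        return fused

    # A two-node chain collapses into one accelerator-friendly kernel.
    mm = Node("matmul")
    print([n.op for n in fuse_matmul_relu([mm, Node("relu", [mm])])])
    # -> ['fused_matmul_relu']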

Facebook VP of infrastructure Jason Taylor

Essentially, Facebook needs help in the silicon department. Instead of isolating itself and building its own chips like Apple and Google, it’s effectively outsourcing the hardware development to the experts. That means it might forego a competitive advantage from this infrastructure, but it also allows it to save money and focus on its core strengths.

“What I talked about today was the difficulty of predicting what chip will really do well in the market. When you build a piece of silicon, you’re making predictions about where the market is going to be in two years,” Facebook’s VP of infrastructure Jason Taylor tells me. “The big question is whether the workload they design for is the workload that’s really important at the time. You’re going to see this fragmentation. At Facebook, we want to work with all the partners out there so we have good options now and over the next several years.” Essentially, by partnering with all the chip makers instead of building its own, Facebook future-proofs its software against volatility in which chip becomes the standard.

The technologies aside, the Scale conference was evidence that Facebook will keep hacking, policy scandals be damned. There was nary a mention of Cambridge Analytica or election interference as a packed room of engineers chuckled at nerdy jokes during keynotes stuffed with enough coding jargon to make the uninitiated assume it was in another language. If Facebook is burning, you couldn’t tell from here.


Thu, 13 Sep 2018 14:00:16 +0000
Facebook’s new ‘SapFix’ AI automatically debugs your code
Facebook has quietly built and deployed an artificial intelligence programming tool called SapFix that scans code, automatically identifies bugs, tests different patches and suggests the best ones that engineers can choose to implement. Revealed today at Facebook’s @Scale engineering conference, SapFix is already running on Facebook’s massive code base and the company plans to eventually […]

Sometimes fake news lives inside of Facebook as photos and videos designed to propel misinformation campaigns, instead of off-site on news articles that can generate their own ad revenue. To combat these politically rather than financially motivated meddlers, Facebook has to be able to detect fake news inside of images and the audio that accompanies video clips. Today it’s expanding its photo and video fact-checking program from four countries to all 23 of its fact-checking partners in 17 countries.

“Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken,” says Facebook product manager Antonia Woodford. “As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model.”

The goal is for Facebook to be able to automatically spot manipulated images, out-of-context images that don’t show what they claim to, or text and audio claims that are provably false.

In last night’s epic 3,260-word security manifesto, Facebook CEO Mark Zuckerberg explained that “The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.” That means using AI to proactively hunt down false news rather than waiting for it to be flagged by users. For that, Facebook needs AI training data that will be produced as exhaust from its partners’ photo and video fact checking operations.

Facebook is developing technology tools to assist its fact-checkers in this process. “We use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles. We are also working on new ways to detect if a photo or video has been manipulated,” Woodford notes, referring to DeepFakes that use AI video editing software to make someone appear to say or do something they haven’t.
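
As a rough sketch of that OCR-and-compare step, the snippet below uses the open-source pytesseract library and Python’s difflib as stand-ins for Facebook’s internal tooling; the headline list and similarity threshold are invented for the example.

    # Sketch of the OCR-and-compare step using open-source stand-ins for
    # Facebook's internal tooling: pytesseract for text extraction and
    # difflib for fuzzy matching. Headlines and threshold are invented.
    import difflib

    import pytesseract
    from PIL import Image

    def extract_text(image_path):
        """Pull whatever text the OCR engine can find in the image."""
        return pytesseract.image_to_string(Image.open(image_path)).strip()

    def match_fact_checks(image_path, debunked_headlines, threshold=0.7):
        """Return already-debunked headlines that closely match the image text."""
        text = extract_text(image_path).lower()
        return [
            headline for headline in debunked_headlines
            if difflib.SequenceMatcher(None, text, headline.lower()).ratio() >= threshold
        ]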

Image memes were one of the most popular forms of disinformation used by the Russian IRA election interferers. The problem is that since they’re so easily re-shareable and don’t require people to leave Facebook to view them, they can get viral distribution from unsuspecting users who don’t realize they’ve become pawns in a disinformation campaign.

Facebook could potentially use the high level of technical resources necessary to build fake news meme-spotting AI as an argument for why Facebook shouldn’t be broken up. With Facebook, Messenger, Instagram, and WhatsApp combined, the company gains economies of scale when it comes to fighting the misinformation scourge.


Thu, 13 Sep 2018 08:02:04 +0000
Facebook rolls out photo/video fact checking so partners can train its AI
Sometimes fake news lives inside of Facebook as photos and videos designed to propel misinformation campaigns, instead of off-site on news articles that can generate their own ad revenue. To combat these politically rather than financially motivated meddlers, Facebook has to be able to detect fake news inside of images and the audio that accompanies […]

Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. Amidst drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy.

Zuckerberg has cast off his premature skepticism and is ready to command the troops. He sees Facebook’s real identity policy as a powerful weapon for truth other social networks lack, but that would be weakened if Instagram and WhatsApp were split off by regulators. He’s done with the finger-pointing and wants everyone to work together on solutions. And he’s adopted a touch of cynicism that could open his eyes and help him predict how people will misuse his creation.

Here are the most important parts of Zuckerberg’s security manifesto:

Zuckerberg embraces his war-time tactician role

“While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.”

The fury he unleashed on Google+, Snapchat, and Facebook’s IPO-killer is now aimed at election attackers

“These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile.”

Balancing free speech and security is complicated and expensive

“These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.”

Putting Twitter and YouTube on blast for allowing pseudonymity…

“One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.”

…While making an argument for why the Internet is more secure if Facebook isn’t broken up

“Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”

Political ads aren’t a business, they’re supposedly a moral duty

“When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.”

Zuckerberg overruled staff to allow academic research on Facebook

“As a result of these controversies [like Cambridge Analytica], there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.”

Calling on law enforcement to step up

“There are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US.”

Instead of minimizing their own blame, the major players must unite forces

“Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — need to do a better job sharing the signals and information they have to prevent abuse . . . The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.”

The end of Zuckerberg’s utopic idealism

“One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”


Wed, 12 Sep 2018 12:30:28 +0000
10 critical points from Zuckerberg’s epic security manifesto
Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. Amidst drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy. Zuckerberg has cast off his premature skepticism and is ready […]

Women in tech are not only significantly under-funded by venture capitalists, but they also often lack access to the early-stage support granted to their male counterparts.

To enroll in a startup accelerator like Y Combinator, for example, founders are expected to relocate to the Bay Area for three months. Women, who are more often caregivers, might not be able to do that, and even if they can, the program may not cater to their specific needs.

Female Founders Alliance (FFA), a relatively new network of female startup founders, has built a free, non-dilutive five-week accelerator for women by women. Called Ready, Set, Raise, its goal is to help more female-founded startups raise VC through workshops, one-on-one coaching, legal clinics, communications and speech coaching, and more. The accelerator, sponsored by Trilogy Equity Partners, kicked off at the end of August and will culminate with a private demo day with VCs in Seattle on September 27th.

“I don’t know many women who can uproot their families for three months to go live in another city,” FFA founder Leslie Feinzaig told TechCrunch. “When I was working on my company, I wanted to apply to Y Combinator but I was a new mom, it was 100 percent a non-starter.”

Feinzaig knows the trials and tribulations of raising VC as a female entrepreneur all too well. As the founder of an edtech startup called Venture Kits, she tried, unsuccessfully, to procure venture backing. That struggle is why she started FFA, which began as a Facebook group to connect female founders in the Seattle area but has expanded across North America.

The accelerator is designed to allow founders to tune into the programming remotely. Participants are only required to be on-site in Seattle, where FFA is based, for one week, during which the organization is providing free childcare.

FFA’s accelerator is among a new class of efforts created for women in tech. All Raise’s Founders For Change initiative, for example, and new female-focused funds, like Sarah Kunst’s Cleo Capital, are all working to close the gender funding gap.

“I know it seems to people like there’s a lot happening around female founders and diverse founders, but in the context of the size and scale of that gender gap, we are barely getting started,” Feinzaig said. “We need all the accelerators. We need hundreds of funds. We are nowhere close to making a real dent in equal leadership.”

Today, FFA is announcing its inaugural class of startups, eight in total. Here’s a closer look at the group:

 


Wed, 12 Sep 2018 11:00:11 +0000
Ready, Set, Raise is a new accelerator built for women by women
Women in tech are not only significantly under-funded by venture capitalists, but they also often lack access to the early-stage support granted to their male counterparts. To enroll in a startup accelerator like Y Combinator, for example, it’s expected founders relocate to the Bay Area for three months. Women, who are more often caregivers, might […]

Helping businesses bring more firepower to the fight against AI-fuelled disruptors is the name of the game for Integrate.ai, a Canadian startup that’s announcing a $30M Series A today.

The round is led by Portag3 Ventures. Other backers include Georgian Partners and Real Ventures, plus other (unnamed) individual investors. The funding will be used for a big push in the U.S. market.

Integrate.ai’s early focus has been on retail banking, retail and telcos, says founder Steve Irvine, along with some startups which have data but aren’t necessarily awash with AI expertise to throw at it. (Not least because tech giants continue to hoover up talent.)

Its SaaS platform targets consumer-centric businesses — offering to plug paying customers into a range of AI technologies and techniques to optimize their decision-making so they can respond more savvily to their customers. Aka turning “high volume consumer funnels” into “flywheels”, if that’s a mental image that works for you.

In short it’s selling AI pattern spotting insights as a service via a “cloud-based AI intelligence platform” — helping businesses move from “largely rules-based decisioning” to “more machine learning-based decisioning boosted by this trusted signals exchange of data”, as he puts it.

Irvine gives the example of a large insurance aggregator the startup is working with to optimize the distribution of gift cards and incentive discounts to potential customers — with the aim of maximizing conversions.

“Obviously they’ve got a finite amount of budget for those — they need to find a way to be able to best deploy those… And the challenge that they have is they don’t have a lot of information on people as they start through this funnel — and so they have what is a classic ‘cold start’ problem in machine learning. And they have a tough time allocating those resources most effectively.”

“One of the things that we’ve been able to help them with is to, essentially, find the likelihood of those people to be able to convert earlier by being able to bring in some interesting new signal for them,” he continues. “Which allows them to not focus a lot of their revenue or a lot of those incentives on people who either have a low likelihood of conversion or are most likely to convert. And they can direct all of those resources at the people in the middle of the distribution — where that type of a nudge, that discount, might be the difference between them converting or not.”
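
Integrate.ai hasn’t published its models, but the targeting logic Irvine describes can be sketched in a few lines: fit a conversion model, score each prospect, and spend incentives only on the middle of the probability distribution. Everything in the snippet (features, labels, thresholds) is synthetic.

    # Made-up sketch of "middle of the distribution" incentive targeting:
    # fit a conversion model, then offer discounts only where a nudge
    # plausibly changes the outcome. Features and thresholds are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                          # behavioral signals
    y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # past conversions

    model = LogisticRegression().fit(X, y)
    p = model.predict_proba(X)[:, 1]                        # conversion probability

    # Skip sure things (p > 0.8) and lost causes (p < 0.2); nudge the middle.
    gets_discount = (p >= 0.2) & (p <= 0.8)
    print(f"Offering discounts to {gets_discount.sum()} of {len(p)} prospects")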

He says feedback from early customers suggests the approach has boosted profitability by around 30% on average for targeted business areas — so the pitch is that businesses see the SaaS easily paying for itself. (In the cited case of the insurer, he says they saw a 23% boost in performance — against what he couches as already “a pretty optimized funnel”.)

“We find pretty consistent [results] across a lot of the companies that we’re working with,” he adds. “Most of these decisions today are made by a CRM system or some other more deterministic software system that tends to over attribute people that are already going to convert. So if you can do a better job of understanding people’s behaviour earlier you can do a better job at directing those resources in a way that’s going to drive up conversion.”

The former Facebook marketing exec, who between 2014 and 2017 ran a couple of global marketing partner programs at Facebook and Instagram, left the social network at the start of last year to found the business — raising $9.6M in seed funding in two tranches, according to Crunchbase.

The eighteen-month-old, Toronto-based AI startup now touts itself as one of the fastest growing companies in Canadian history, with a headcount of around 40 at this point and a plan to grow staff 3x to 4x over the next 12 months. Irvine is also targeting 10x revenue growth with the new funding in place — gunning to carve out a leadership position in the North American market.

One key aspect of Integrate.ai’s platform approach means its customers aren’t only being helped to extract more and better intel from their own data holdings, via processes such as structuring the data for AI processing (though Irvine says it’s also doing that).

The idea is they also benefit from the wider network, deriving relevant insights across Integrate.ai’s pooled base of customers — in a way that does not trample over privacy in the process. At least, that’s the claim.

(It’s worth noting Integrate.ai’s network is not a huge one yet, with customers numbering in the “tens” at this point — the platform only launched in alpha around 12 months ago and remains in beta now. Named customers include the likes of Telus, Scotiabank, and Corus.)

So the idea is to offer an alternative route to boost business intelligence vs the “traditional” route of data-sharing by simply expanding databases — because, as Irvine points out, literal data pooling is “coming under fire right now — because it is not in the best interests, necessarily, of consumers; there’s some big privacy concerns; there’s a lot of security risk which we’re seeing show up”.

What exactly is Integrate.ai doing with the data then? Irvine says its Trusted Signals Exchange platform uses some “pretty advanced techniques in deep learning and other areas of machine learning to be able to transfer signals or insights that we can gain from different companies such that all the companies on our platform can benefit by delivering more personalized, relevant experiences”.

“But we don’t need to ever, kind of, connect data in a more traditional way,” he also claims. “Or pull personally identifiable information to be able to enable it. So it becomes very privacy-safe and secure for consumers which we think is really important.”

He further couches the approach as “pretty unique”, adding it “wouldn’t even have been possible probably a couple of years ago”.

From Irvine’s description the approach sounds similar to the data linking (via mathematical modelling) route being pursued by another startup, UK-based InfoSum — which has built a platform that extracts insights from linked customer databases while holding the actual data in separate silos. (And InfoSum, which was founded in 2016, also has a founder with a behind-the-scenes view of the inner workings of the social web — in the form of Datasift’s Nick Halstead.)

Facebook’s own custom audiences product, which lets advertisers upload and link their customer databases with the social network’s data holdings for marketing purposes, is the likely inspiration behind all of these efforts.

Irvine says he spotted the opportunity to build this line of business having been privy to a market overview in his role at Facebook, meeting with scores of companies in his marketing partner role and getting to hear high level concerns about competing with tech giants. He says the Facebook job also afforded him an overview on startup innovation — and there he spied a gap for Integrate.ai to plug in.

“My team was in 22 offices around the world, and all the major tech hubs, and so we got a chance to see any of the interesting startups that were getting traction pretty quickly,” he tells TechCrunch. “That allowed us to see the gaps that existed in the market. And the biggest gap that I saw… was these big consumer enterprises needed a way to use the power of AI and needed access to third party data signals or insights to enable them to transition to this more customer-centric operating model to have any hope of competing with the large digital disruptors like Amazon.

“That was kind of the push to get me out of Facebook, back from California to Toronto, Canada, to start this company.”

Again on the privacy front, Irvine is a bit coy about going into exact details about the approach. But he is unequivocal and emphatic about how ad tech players are stepping over the line — having seen into that Pandora’s box for years — so his rationale for wanting to do things differently at least looks clear.

“A lot of the techniques that we’re using are in the field of deep learning and transfer learning,” he says. “If you think about the ultimate consumer of this data-sharing, that is insight sharing, it is at the end these AI systems or models. Meaning that it doesn’t need to be legible to people as an output — all we’re really trying to do is increase the map; make a better probabilistic decision in these circumstances where we might have little data or not the right data that we need to be able to make the right decision. So we’re applying some of the newer techniques in those areas to be able to essentially kind of abstract away from some of the more sensitive areas, create representations of people and patterns that we see between businesses and individuals, and then use that as a way to deliver more personalized predictions — without ever having to know the individual’s personally identifiable information.”

“We do do some work with differential privacy,” he adds when pressed further on the specific techniques being used. “There’s some other areas that are just a little bit more sensitive in terms of the work that we’re doing — but a lot of work around representative learning and transfer learning.”
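
Irvine doesn’t say how that differential privacy work is applied, but the canonical building block is simple to illustrate: release an aggregate statistic with noise calibrated so the presence or absence of any single user is masked. A minimal Laplace-mechanism sketch, with an illustrative epsilon and query:

    # Minimal Laplace-mechanism sketch: share a noisy aggregate rather than
    # raw user rows. Epsilon and the counting query are illustrative; the
    # article does not say how Integrate.ai applies differential privacy.
    import numpy as np

    def dp_count(values, epsilon=1.0):
        """Noisy count with sensitivity 1: adding or removing any single
        user changes the true count by at most 1, so Laplace noise with
        scale 1/epsilon bounds what the released number reveals."""
        true_count = float(np.sum(values))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Company A shares only the noisy statistic, never individual records.
    converted = np.array([1, 0, 1, 1, 0, 1])
    print(dp_count(converted, epsilon=0.5))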

Integrate.ai has published a whitepaper — for a framework to “operationalize ethics in machine learning systems” — and Irvine says it’s been called in to meet and “share perspectives” with regulators based on that.

“I think we’re very GDPR-friendly based on the way that we have thought through and constructed the platform,” he also says when asked whether the approach would be compliant with the European Union’s tough new privacy framework (which also places some restrictions on entirely automated decisions when they could have a significant impact on individuals).

“I think you’ll see GDPR and other regulations like that push more towards these type of privacy preserving platforms,” he adds. “And hopefully away from a lot of the really creepy, weird stuff that is happening out there with consumer data that I think we all hope gets eradicated.”

For the record, Irvine denies any suggestion that he was thinking of his old employer when he referred to “creepy, weird stuff” done with people’s data — saying: “No, no, no!”

“What I did observe when I was there in ad tech in general, I think if you look at that landscape, I think there are many, many… worse examples of what is happening out there with data than I think the ones that we’re seeing covered in the press. And I think as the light shines on more of that ecosystem of players, I think we will start to see that the ways they’ve thought about data, about collection, permissioning, usage, I think will change drastically,” he adds.

“And the technology is there to be able to do it in a much more effective way without having to compromise results in too big a way. And I really hope that that sea change has already started — and I hope that it continues at a much more rapid pace than we’ve seen.”

But while privacy concerns might be reduced by the use of an alternative to traditional data-pooling, depending on the exact techniques being used, additional ethical considerations are clearly being dialled sharply into view if companies are seeking to supercharge their profits by automating decision making in sensitive and impactful areas such as discounts (meaning some users stand to gain more than others).

The point is an AI system that’s expert at spotting the lowest hanging fruit (in conversion terms) could start selectively distributing discounts to a narrow sub-section of users only — meaning other people might never even be offered discounts.

In short, it risks the platform creating unfair and/or biased outcomes.

Integrate.ai has recognized the ethical pitfalls, and appears to be trying to get ahead of them — hence its aforementioned ‘Responsible AI in Consumer Enterprise’ whitepaper.

Irvine also says that raising awareness around issues of bias and “ethical AI” — and promoting “more responsible use and implementation” of its platform is another priority over the next twelve months.

“The biggest concern is the unethical treatment of people in a lot of common, day-to-day decisions that companies are going to be making,” he says of problems attached to AI. “And they’re going to do it without understanding, and probably without bad intent, but the reality is the results will be the same — which is perpetuating a lot of biases and stereotypes of the past. Which would be really unfortunate.

“So hopefully we can continue to carve out a name, on that front, and shift the industry more to practices that we think are consistent with the world that we want to live in vs the one we might get stuck in.”

The whitepaper was produced by a dedicated internal team, which he says focuses on AI ethics and fairness issues, and is headed up by VP of product & strategy, Kathryn Hume.

“We’re doing a lot of research now with the Vector Institute for AI… on fairness in our AI models, because what we’ve seen so far is that — if left unattended, if all we did was run these models and not adjust for some of the ethical considerations — we would just perpetuate biases that we’ve seen in the historical data,” he adds.

“We would pick up patterns that are more commonly associated with maybe reinforcing particular stereotypes… so we’re putting a really dedicated effort — probably abnormally large, given our size and stage — towards leading in this space, and making sure that that’s not the outcome that gets delivered through effective use of a platform like ours. But actually, hopefully, the total opposite: You have a better understanding of where those biases might creep in and they could be adjusted for in the models.”

Combating unfairness in this type of AI tool would mean a company having to optimize conversion performance a bit less than it otherwise could.

Though Irvine suggests that’s likely just in the short term. Over the longer term he argues you’re laying the foundations for greater growth because you’re building a more inclusive business, saying: “We have this conversation a lot. I think it’s good for business, it’s just the time horizon that you might think about.”

“We’ve got this window of time right now, that I think is a really precious window, where people are moving over from more deterministic software systems to these more probabilistic, AI-first platforms… They just operate much more effectively, and they learn much more effectively, so there will be a boost in performance no matter what. If we can get them moved over right off the bat onto a platform like ours that has more of an ethical safeguard, then they won’t notice a drop off in performance — because it’ll actually be better performance. Even if it’s not optimized fully for short term profitability,” he adds.

“And we think, over the long term it’s just better business if you’re a socially conscious, ethical company. We think, over time, especially this new generation of consumers, they start to look out for those things more… So we really hope that we’re on the right side of this.”

He also suggests that the wider visibility afforded by having AI doing the probabilistic pattern spotting (vs just using a set of rules) could even help companies identify unfairnesses they don’t even realize might be holding their businesses back.

“We talk a lot about this concept of mutual lifetime value — which is how do we start to pull in the signals that show that people are getting value in being treated well, and can we use those signals as part of the optimization. And maybe you don’t have all the signal you need on that front, and that’s where being able to access a broader pool can actually start to highlight those biases more.”


Tue, 11 Sep 2018 16:04:41 +0000
Integrate.ai pulls in $30M to help businesses make better customer-centric decisions
Helping businesses bring more firepower to the fight against AI-fuelled disruptors is the name of the game for Integrate.ai, a Canadian startup that’s announcing a $30M Series A today. The round is led by Portag3 Ventures . Other VCs include Georgian Partners, Real Ventures, plus other (unnamed) individual investors also participating. The funding will be […]

Memes are the language of the web and Facebook wants to better understand them.

Facebook’s AI teams have made substantial advances over the years in both computer vision and natural language recognition. Today, they’ve announced some of their latest work, which combines advances in the two fields. A new system, codenamed “Rosetta,” helps teams at Facebook and Instagram identify text within images to better understand what their subject is and more easily classify them for search or to flag abusive content.

It’s not all memes; the tool scans over a billion images and video frames daily across multiple languages in real time, according to a company blog post.

Rosetta makes use of recent advances in optical character recognition (OCR) to first scan an image and detect text that is present, at which point the characters are placed inside a bounding box that is then analyzed by convolutional neural nets that try to recognize the characters and determine what’s being communicated.


This technology has been in practice for a while — Facebook has been working with OCR since 2015 — but implementing this across the company’s vast networks provides a crazy degree of scale that motivated the company to develop some new strategies around character detection and recognition.
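
For intuition, here is a heavily simplified, hypothetical take on that two-stage pipeline in PyTorch: a detector (a stand-in callable here) proposes text boxes, the crops are resized, and a toy convolutional net scores them. The real system predicts whole character sequences per word region rather than a single class per box.

    # Heavily simplified, hypothetical take on a Rosetta-style pipeline in
    # PyTorch: a detector callable (stand-in) proposes text boxes, crops are
    # resized, and a toy convolutional net scores one character class per
    # box. The real system predicts whole character sequences per region.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CharRecognizer(nn.Module):
        """Toy convolutional recognizer for 32x32 grayscale text crops."""
        def __init__(self, vocab_size=96):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 8 * 8, vocab_size)

        def forward(self, crops):                  # crops: (N, 1, 32, 32)
            return self.head(self.conv(crops).flatten(1))

    def crop_and_resize(image, box, size=32):
        """Cut one detected box out of a (1, H, W) image and resize it."""
        x0, y0, x1, y1 = box
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)
        return F.interpolate(crop.float(), size=(size, size)).squeeze(0)

    def read_image_text(image, detect_boxes, recognizer):
        """Stage 1: detect text regions. Stage 2: recognize their contents."""
        boxes = detect_boxes(image)                # hypothetical detector
        if not boxes:
            return torch.empty(0, dtype=torch.long)
        crops = torch.stack([crop_and_resize(image, b) for b in boxes])
        return recognizer(crops).argmax(dim=1)     # one class id per box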

If you’re interested in some of the more technical details of what they did here, check out the team’s research paper on the topic.

Facebook has plenty of reasons to be interested in the text that is accompanying videos or photos, particularly when it comes to their content moderation needs.

Identifying spam is pretty straightforward when the text description of a photo is “Bruh!!! 

Mon, 10 Sep 2018 17:00:33 +0000
Facebook’s ‘Rosetta’ system helps the company understand memes
Memes are the language of the web and Facebook wants to better understand them. Facebook’s AI teams have made substantial advances over the years in both computer vision and natural language recognition. Today, they’ve announced some of their latest work that works to combine advances in the two fields. A new system, codenamed “Rosetta,” helps […]

Facebook Lite, the social network’s product for people in areas with low connectivity or limited internet, is making Community Help available to people in more than 100 countries. Facebook Lite uses less data, and installs and loads faster than the standard Facebook app. Facebook Lite also works on lower-end devices and slower internet networks.

Facebook first launched Community Help last February to help people find and give help in the areas of food, shelter and transportation in the aftermath of natural disasters and building fires — two types of crises in which Safety Check would likely be activated.

“Our priority is to build tools that provide people with ways to get the help they need to recover and rebuild after a crisis,” Facebook Crisis Response product manager Jeong-Suh Choi said in a release. “By making Community Help available on Facebook Lite, even more people can get the help they need in times of crisis.”

This past February, Facebook beefed up its efforts to enable companies like Lyft, Chase, International Medical Corps and Save the Children to provide similar services to people in crisis.


Sat, 08 Sep 2018 20:33:25 +0000
Facebook Lite adds additional crisis response tools
Facebook Lite, the social network’s product for people in areas with low connectivity or limited internet, is making Community Help available to people in more than 100 countries. Facebook Lite uses less data, and installs and loads faster than the standard Facebook app. Facebook Lite also works on lower-end devices and slower internet networks. Facebook […]

Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded in response, and noted for his part that he’s very open to establishing “a regular cadence with our industry peers.”

Social media companies have established extensive policies on what constitutes “hate speech” on their platforms. But discrepancies between these policies open the possibility for propagators of hate to game the platforms and still get their vitriol out to a large audience. Collaboration of the kind Sandberg and Dorsey discussed can lead to a more consistent approach to hate speech that will prevent the gaming of platforms’ policies.

But collaboration between competitors as dominant as Facebook and Twitter are in social media poses an important question: would antitrust or other laws make their coordination illegal?

The short answer is no. Facebook and Twitter are private companies that get to decide what user content stays and what gets deleted off of their platforms. When users sign up for these free services, they agree to abide by their terms. Neither company is under a First Amendment obligation to keep speech up. Nor can it be said that collaboration on platform safety policies amounts to collusion.

This could change based on an investigation into speech policing on social media platforms being considered by the Justice Department. But it’s extremely unlikely that Congress would end up regulating what platforms delete or keep online – not least because it may violate the First Amendment rights of the platforms themselves.

What is hate speech anyway?

Trying to find a universal definition for hate speech would be a fool’s errand, but in the context of private companies hosting user generated content, hate speech for social platforms is what they say is hate speech.

Facebook’s 26-page Community Standards include a whole section on how Facebook defines hate speech. For Facebook, hate speech is “anything that directly attacks people based on . . . their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.” While that might be vague, Facebook then goes on to give specific examples of what would and wouldn’t amount to hate speech, all while making clear that there are cases – depending on the context – where speech will still be tolerated if, for example, it’s intended to raise awareness.

Twitter uses a “hateful conduct” prohibition which they define as promoting “violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” They also prohibit hateful imagery and display names, meaning it’s not just what you tweet but what you also display on your profile page that can count against you.

Both companies constantly revise and supplement their definitions as new test cases arise and as words take on new meaning. For example, two common slang terms, one used by Russians for Ukrainians and one used by Ukrainians for Russians, were determined to be hate speech after war erupted in Eastern Ukraine in 2014. An internal review by Facebook found that what used to be common slang had turned into derogatory, hateful language.

Would collaboration on hate speech amount to anticompetitive collusion?

Under U.S. antitrust laws, companies cannot collude to make anticompetitive agreements or try to monopolize a market. A company which becomes a monopoly by having a superior product in the marketplace doesn’t violate antitrust laws. What does violate the law is dominant companies making an agreement – usually in secret – to deceive or mislead competitors or consumers. Examples include price fixing, restricting new market entrants, or misrepresenting the independence of the relationship between competitors.

A Pew survey found that 68% of Americans use Facebook. According to Facebook’s own records, the platform had a whopping 1.47 billion daily active users on average for the month of June and 2.23 billion monthly active users as of the end of June – with over 200 million in the US alone. While Twitter doesn’t disclose its number of daily users, it does publish the number of monthly active users which stood at 330 million at last count, 69 million of which are in the U.S.

There can be no question that Facebook and Twitter are overwhelmingly dominant in the social media market. That kind of dominance has led to calls for breaking up these giants under antitrust laws.

Would those calls hold more credence if the two social giants began coordinating their policies on hate speech?

The answer is probably not, but it does depend on exactly how they coordinated. Social media companies like Facebook, Twitter, and Snapchat have grown large internal product policy teams that decide the rules for using their platforms, including on hate speech. If these teams were to get together behind closed doors and coordinate policies and enforcement in a way that would preclude smaller competitors from being able to enter the market, then antitrust regulators may get involved.

Antitrust would also come into play if, for example, Facebook and Twitter got together and decided to charge twice as much for advertising that includes hate speech (an obviously absurd scenario) – in other words, using their market power to affect pricing of certain types of speech that advertisers use.

In fact, coordination around hate speech may reduce anti-competitive concerns. Given the high user engagement around hate speech, banning it could lead to reduced profits for the two companies and provide an opening to upstart competitors.

Sandberg and Dorsey’s testimony Wednesday didn’t point to executives hell-bent on keeping competition out through collaboration. Rather, their potential collaboration is probably better seen as an industry deciding on “best practices,” a common occurrence in other industries including those with dominant market players.

What about the First Amendment?

Private companies are not subject to the First Amendment. The Constitution applies to the government, not to corporations. A private company, no matter its size, can ignore your right to free speech.

That’s why Facebook and Twitter already can and do delete posts that contravene their policies. Calling for the extermination of all immigrants, referring to Africans as coming from shithole countries, and even anti-gay protests at military funerals may be protected in public spaces, but social media companies get to decide whether they’ll allow any of that on their platforms. As Harvard Law School’s Noah Feldman has stated, “There’s no right to free speech on Twitter. The only rule is that Twitter Inc. gets to decide who speaks and listens–which is its right under the First Amendment.”

Instead, when it comes to social media and the First Amendment, courts have been more focused on not allowing the government to keep citizens off of social media. Just last year, the U.S. Supreme Court struck down a North Carolina law that made it a crime for a registered sex offender to access social media if children use that platform. During the hearing, the justices asked the government probing questions about citizens’ right to free speech on social media, from Facebook to Snapchat to Twitter and even LinkedIn.

Justice Ruth Bader Ginsburg made clear during the hearing that restricting access to social media would mean “being cut off from a very large part of the marketplace of ideas [a]nd [that] the First Amendment includes not only the right to speak, but the right to receive information.”

The Court ended up deciding that the law violated the fundamental First Amendment principle that “all persons have access to places where they can speak and listen,” noting that social media has become one of the most important forums for expression of our day.

Lower courts have also ruled that public officials who block users off their profiles are violating the First Amendment rights of those users. Judge Naomi Reice Buchwald, of the Southern District of New York, decided in May that Trump’s Twitter feed is a public forum. As a result, she ruled that when Trump blocks citizens from viewing and replying to his posts, he violates their First Amendment rights.

The First Amendment doesn’t mean Facebook and Twitter are under any obligation to keep up whatever you post, but it does mean that the government can’t just ban you from accessing your Facebook or Twitter accounts – and probably can’t block you off of their own public accounts either.

Collaboration is Coming?

Sandberg made clear in her testimony on Wednesday that collaboration is already happening when it comes to keeping bad actors off of platforms. “We [already] get tips from each other. The faster we collaborate, the faster we share these tips with each other, the stronger our collective defenses will be.”

Dorsey for his part stressed that keeping bad actors off of social media “is not something we want to compete on.” Twitter is here “to contribute to a healthy public square, not compete to have the only one, we know that’s the only way our business thrives and helps us all defend against these new threats.”

He even went further. When it comes to the drafting of their policies, beyond collaborating with Facebook, he said he would be open to a public consultation. “We have real openness to this. . . . We have an opportunity to create more transparency with an eye to more accountability but also a more open way of working – a way of working for instance that allows for a review period by the public about how we think about our policies.”

I’ve already argued why tech firms should collaborate on hate speech policies; the question that remains is whether doing so would be legal. The First Amendment does not apply to social media companies. Antitrust laws don’t seem to stand in their way either. And based on how Senator Burr, Chairman of the Senate Select Committee on Intelligence, chose to close the hearing, the government seems supportive of social media companies collaborating. Addressing Sandberg and Dorsey, he said, “I would ask both of you. If there are any rules, such as any antitrust, FTC, regulations or guidelines that are obstacles to collaboration between you, I hope you’ll submit for the record where those obstacles are so we can look at the appropriate steps we can take as a committee to open those avenues up.”



Hate speech, collusion, and the constitution
Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded […]