While Elon Musk and Mark Zuckerberg debate the dangers of artificial general intelligence, startups applying AI to more narrowly defined problems such as accelerating the performance of sales teams and improving the operating efficiency of manufacturing lines are building billion-dollar businesses. Narrowly defining a problem, however, is only the first step to finding valuable business applications of AI.
To find the right opportunity around which to build an AI business, startups must apply the “Goldilocks principle” in several different dimensions to find the sweet spot that is “just right” to begin — not too far in one dimension, not too far in another. Here are some ways for aspiring startup founders to thread the needle with their AI strategy, based on what we’ve learned from working with thousands of AI startups.
Unlike pre-intelligence software, AI systems respond to the environment in which they operate: algorithms take in data and return an answer or prediction. Depending on the application, that prediction may describe an outcome in the near term, such as tomorrow's weather, or an outcome many years in the future, such as whether a patient will develop cancer in 20 years. The time horizon of the algorithm's prediction is critical to its usefulness and to whether it offers an opportunity to build defensibility.
Algorithms making predictions with long time horizons are difficult to evaluate and improve. For example, an algorithm may use the schedule of a contractor’s previous projects to predict that a particular construction project will fall six months behind schedule and go over budget by 20 percent. Until this new project is completed, the algorithm designer and end user can only tell whether the prediction is directionally correct — that is, whether the project is falling behind or costs are higher.
Even when the final project numbers end up very close to the predicted numbers, it will be difficult to complete the feedback loop and positively reinforce the algorithm. Many factors may influence complex systems like a construction project, making it difficult to A/B test the prediction to tease out the input variables from unknown confounding factors. The more complex the system, the longer it may take the algorithm to complete a reinforcement cycle, and the more difficult it becomes to precisely train the algorithm.
While many enterprise customers are open to piloting AI solutions, startups must be able to validate the algorithm’s performance in order to complete the sale. The most convincing way to validate an algorithm is by using the customer’s real-time data, but this approach may be difficult to achieve during a pilot. If the startup does get access to the customer’s data, the prediction time horizon should be short enough that the algorithm can be validated during the pilot period.
Historic data, if it’s available, can serve as a stopgap to train an algorithm and temporarily validate it via backtesting. Training an algorithm making long time horizon predictions on historic data is risky because processes and environments are more likely to have changed the further back you dig into historic records, making historic data sets less descriptive of present-day conditions.
In other cases, while historic data describing outcomes exists for you to train an algorithm, it may not capture the input variable under consideration. In the construction example, that could mean discovering that sites using blue safety hats are more likely to complete projects on time, but since hat color wasn't previously considered relevant to managing projects, that information wasn't recorded in the archival records. This data must be captured from scratch, which further delays your time to market.
Instead of making singular “hero” predictions with long time horizons, AI startups should build multiple algorithms making smaller, simpler predictions with short time horizons. Decomposing an environment into simpler subsystems or processes limits the number of inputs, making them easier to control for confounding factors. The BIM 360 Project IQ Team at Autodesk takes this small prediction approach to areas that contribute to construction project delays. Their models predict safety and score vendor and subcontractor quality/reliability, all of which can be measured while a project is ongoing.
Shorter time horizons make it easier for the algorithm engineer to monitor the algorithm's change in performance and take action to quickly improve it, instead of being limited to backtesting on historic data. The shorter the time horizon, the shorter the algorithm's feedback loop will be. Because each cycle through the feedback loop incrementally compounds the algorithm's performance, shorter feedback loops are better for building defensibility.
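As a toy illustration of that compounding claim (the numbers here are invented for the sketch, not from the article), assume each completed feedback cycle trims a fixed fraction of the model's remaining error. Over the same calendar window, a one-day horizon fits hundreds of cycles while a six-month horizon fits only two:

```python
# Toy model: each completed feedback cycle reduces the remaining error
# by a fixed fraction ("gain"). Shorter horizons fit more cycles into
# the same calendar window, so improvement compounds much faster.
def error_after(days: int, horizon_days: int,
                start_error: float = 0.5, gain: float = 0.02) -> float:
    cycles = days // horizon_days          # completed feedback cycles
    return start_error * (1 - gain) ** cycles

# One year of daily predictions vs. one year of six-month predictions.
daily = error_after(365, 1)        # 365 feedback cycles
semiannual = error_after(365, 180) # only 2 feedback cycles
print(daily < semiannual)  # True: the short-horizon model ends up far more accurate
```

The gain per cycle and starting error are arbitrary; the point is only that cycle count, not calendar time, drives the compounding.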
Most algorithms model dynamic systems and return a prediction for a human to act on. Depending on how quickly the system is changing, the algorithm’s output may not remain valid for very long: the prediction may “decay” before the user can take action. In order to be useful to the end user, the algorithm must be designed to accommodate the limitations of computing and human speed.
In a typical AI-human workflow, the human feeds input data into the algorithm, the algorithm runs calculations on that input data and returns an output that predicts a certain outcome or recommends a course of action; the human interprets that information to decide on a course of action, then takes action. The time it takes the algorithm to compute an answer and the time it takes for a human to act on the output are the two largest bottlenecks in this workflow.
For most of AI history, slow computational speeds have severely limited the scope of applied AI. An algorithm’s prediction depends on the input data, and the input data represents a snapshot in time at the moment it was recorded. If the environment described by the data changes faster than the algorithm can compute the input data, by the time the algorithm completes its computations and returns a prediction, the prediction will only describe a moment in the past and will not be actionable. For example, the algorithm behind the music app Shazam may have needed several hours to identify a song after first “hearing” it using the computational power of a Windows 95 computer.
The rise of cloud computing and the development of hardware specially optimized for AI computations has dramatically broadened the scope of areas where applied AI is actionable and affordable. While macro tech advancements can greatly advance applied AI, the algorithm is not totally held hostage to current limits of computation; reinforcement through training also can improve the algorithm’s response time. The more of the same example an algorithm encounters, the more quickly it can skip computations to arrive at a prediction. Thanks to advances in computation and reinforcement, today Shazam takes less than 15 seconds to identify a song.
Automating the decision and action also could help users make use of predictions that decay too quickly to wait for humans to respond. Opsani is one such company using AI to make decisions that are too numerous and fast-moving for humans to make effectively. Unlike human DevOps engineers, who can only move so fast to optimize performance based on recommendations from an algorithm, Opsani applies AI to both identify and automatically improve the operations of applications and cloud infrastructure, so its customers can enjoy dramatically better performance.
Not all applications of AI can be completely automated, however, if the perceived risk is too high for end users to accept, or if regulations mandate that humans must approve the decision.
Just like software startups launch when they have built a minimum viable product (MVP) in order to collect actionable feedback from initial customers, AI startups should launch when they reach the minimum algorithmic performance (MAP) required by early adopters, so that the algorithm can be trained on more diverse and fresh data sets and avoid becoming overfit to a training set.
Most applications don't require 100 percent accuracy to be valuable. For example, a fraud detection algorithm may only immediately catch five percent of fraud cases within 24 hours of when they occur, while human fraud investigators catch 15 percent of fraud cases after a month of analysis. In this case, the MAP is zero, because the fraud detection algorithm could serve as a first filter in order to reduce the number of cases the human investigators must process. The startup can go to market immediately in order to secure access to the large volume of fraud data used for training their algorithm. Over time, the algorithm's accuracy will improve and reduce the burden on human investigators, freeing them to focus on the most complex cases.
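A minimal sketch of that first-filter idea (the names, scoring function and threshold here are hypothetical, not from any real fraud system): even a crude score can shrink the queue that human investigators must work through.

```python
# Hypothetical first-filter triage: an imperfect fraud score auto-flags
# the riskiest cases and leaves the rest for human investigators.
def triage(cases, score, threshold=0.8):
    """Split cases into (auto_flagged, needs_human_review)."""
    auto_flagged = [c for c in cases if score(c) >= threshold]
    needs_review = [c for c in cases if score(c) < threshold]
    return auto_flagged, needs_review

# Toy stand-in for a trained model: score by transaction amount.
def toy_score(case):
    return min(case["amount"] / 1000, 1.0)

cases = [
    {"id": 1, "amount": 950},  # score 0.95 -> auto-flagged
    {"id": 2, "amount": 120},  # score 0.12 -> human review
    {"id": 3, "amount": 870},  # score 0.87 -> auto-flagged
]
flagged, review = triage(cases, toy_score)
print([c["id"] for c in flagged], [c["id"] for c in review])  # [1, 3] [2]
```

In production the threshold would be tuned against the cost of false positives, but even this crude split illustrates why the MAP can be zero: any nonzero catch rate reduces the human workload.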
Startups building algorithms for zero or low MAP applications will be able to launch quickly, but may be continuously looking over their shoulder for copycats, if these copycats appear before the algorithm has reached a high level of performance.
Startups attacking low MAP problems also should watch out for problems that can be solved with near 100 percent accuracy with a very small training set, where the problem being modeled is relatively simple, with few dimensions to track and few possible variations in outcome.
AI-powered contract processing is a good example of an application where the algorithm's performance plateaus quickly. There are thousands of contract types, but most of them share key fields: the parties involved, the items of value being exchanged, the time frame, etc. Specific document types like mortgage applications or rental agreements are highly standardized in order to comply with regulation. Across multiple startups, we have seen that algorithms which automatically process these documents need only a few hundred examples to train to an acceptable degree of accuracy, after which additional examples do little to improve the algorithm, making it easy for new entrants to match incumbents and earlier entrants in performance.
AIs built for applications where human labor is inexpensive and able to easily achieve high accuracy may need to reach a higher MAP before they can find an early adopter. Tasks requiring fine motor skills, for example, have yet to be taken over by robots because human performance sets a very high MAP to overcome. When picking up an object, the AIs powering the robotic hand must gauge an object’s stiffness and weight with a high degree of accuracy, otherwise the hand will damage the object being handled. Humans can very accurately gauge these dimensions with almost no training. Startups attacking high MAP problems must invest more time and capital into acquiring enough data to reach MAP and launch.
Narrow AI can demonstrate impressive gains in a wide range of applications — in the research lab. Building a business around a narrow AI application, on the other hand, requires a new playbook. This process is heavily dependent on the specific use case on all dimensions, and the performance of the algorithm is merely one starting point. There’s no one-size-fits-all approach to moving an algorithm from the research lab to the market, but we hope these ideas will provide a useful blueprint for you to begin.
CEO John Lemp recently said that thanks to a new policy, publishers in Revcontent's content recommendation network "won't ever make a cent" on false and misleading stories — at least, not from the network.
To achieve this, the company is relying on fact-checking provided by the Poynter Institute's International Fact Checking Network. If any two independent fact-checkers from the network flag a story from the Revcontent network as false, the company's widget will be removed, and Revcontent will not pay out any money on that story (not even revenue earned before the story was flagged).
In some ways, Revcontent’s approach to fighting fake news and misinformation sounds similar to the big social media companies — Lemp, like Twitter, has said his company cannot be the “arbiter of truth,” and like Facebook, he’s emphasizing the need to remove the financial incentives for posting sensationalistic-but-misleading stories.
However, Lemp (who’s spoken in the past about using content recommendations to reduce publishers’ reliance on individual platforms) criticized the big internet companies for “arbitrarily” taking down content in response to “bad PR.” In contrast, he said Revcontent will have a fully transparent approach, one that removes the financial rewards for fake news without silencing anyone.
Lemp didn’t mention any specific takedowns, but the big story these days is Infowars. It seems like nearly everyone has been cracking down on Alex Jones’ far-right, conspiracy-mongering site, removing at least some Infowars-related accounts and content in the past couple of weeks.
The Infowars story also raises the question of whether you can effectively fight fake news on a story-by-story basis, rather than completely cutting off publishers when they’ve shown themselves to consistently post misleading or falsified stories.
When asked about this, Lemp said Revcontent also has the option of completely removing publishers from the network, but he views that as a "last resort."
The BitFi crypto wallet was supposed to be unhackable and none other than famous weirdo John McAfee claimed that the device – essentially an Android-based mini tablet – would withstand any attack. Spoiler alert: it couldn’t.
First, a bit of background. The $120 device launched at the beginning of this month to much fanfare. It consisted of a device that McAfee claimed contained no software or storage and was instead a standalone wallet similar to the Trezor. The website featured a bold claim by McAfee himself, one that would give a normal security researcher pause:
Further, the company offered a bug bounty that seems to be slowly being eroded by outside forces. They asked hackers to pull coins off of a specially prepared $10 wallet, a move that is uncommon in the world of bug bounties. They wrote:
We deposit coins into a Bitfi wallet
If you wish to participate in the bounty program, you will purchase a Bitfi wallet that is preloaded with coins for just an additional $10 (the reason for the charge is because we need to ensure serious inquiries only)
If you successfully extract the coins and empty the wallet, this would be considered a successful hack
You can then keep the coins and Bitfi will make a payment to you of $250,000
Please note that we grant anyone who participates in this bounty permission to use all possible attack vectors, including our servers, nodes, and our infrastructure
Hackers began attacking the device immediately, eventually hacking it to find the passphrase used to move crypto in and out of the wallet. In a detailed set of tweets, security researchers Andrew Tierney and Alan Woodward began finding holes by attacking the operating system itself. However, this did not match the bounty to the letter, claimed BitFi, even though they did not actually ship any bounty-ready devices.
Then, to add insult to injury, the company earned a Pwnie Award for worst vendor response at the security conference Defcon. As hackers began dismantling the device, BitFi went on the defensive, consistently claiming that their device was secure. And the hackers had a field day. One hacker, 15-year-old Saleem Rashid, was able to play Doom on the device.
The hacks kept coming. McAfee, for his part, kept refusing to accept the hacks as genuine.
Unfortunately, the latest hack may have just fulfilled all of BitFi's requirements. Rashid and Tierney have been able to pull cash out of the wallet by hacking the passphrase, a primary requirement for the bounty. "We have sent the seed and phrase from the device to another server, it just gets sent using netcat, nothing fancy," Tierney said. "We believe all conditions have been met."
The end state of this crypto mess? BitFi did what most hacked crypto companies do: double down on the threats. In a recently deleted Tweet they made it clear that they were not to be messed with:
The researchers, however, may still have the last laugh.
Lime and Bird are protesting recommendations in Santa Monica, Calif. that would prevent the electric scooter companies from operating in the Southern California city. We first saw the news over on Curbed LA, which reported both Lime and Bird are temporarily halting their services in Santa Monica.
Last week, Santa Monica’s shared mobility device selection committee recommended the city move forward with Lyft and Uber-owned Jump as the two exclusive scooter operators in the city during the upcoming 16-month pilot program. The committee ranked Lyft and Jump highest due to their experience in the transportation space, staffing strategy, commitments to diversity and equity, fleet maintenance strategies and other elements. Similarly, the committee recommended both Lyft and Jump as bike-share providers in the city.
“The Lyft and Uber applications to operate e-scooter sharing programs in Santa Monica demonstrate the desperate lengths CO2 polluting companies will go to for the purpose of undermining clean energy competition,” a Bird spokesperson told TechCrunch. “We at Bird are dedicated to replacing car trips with clean energy trips and will continue to fight against car dependency alongside our loyal riders.”
Now, both Bird and Lime are asking their respective riders to speak out against the recommendations. Bird, which first launched in Santa Monica, has also emailed riders, asking them to tell the city council that they want Bird to stay.
“In a closed-door meeting, a small city-appointed selection committee decided to recommend banning Bird from your city beginning in September,” Bird wrote in an email to customers. “This group inexplicably scored companies with no experience ever operating shared e-scooters higher than Bird who invented this model right here in Santa Monica.”
Bird goes on to throw shade at Uber and Lyft — neither of which has operated electric scooter services before. That shade is entirely fair, but one could argue both Uber and Lyft already have more experience operating transportation services within cities and would be better equipped to run an electric scooter service than a newer company.
Lime says it’s worked collaboratively with the city to design a program tailored to the needs of the Santa Monica community since day one.
“It’s clear Santa Monica residents and visitors have enthusiastically embraced Lime, with over 180,000 unique riders choosing us as their affordable, zero-emission transportation option since we launched in April,” Lime CEO Toby Sun said in a statement to TechCrunch. “As the most experienced shared bike and scooter company in the United States, we are disappointed by the City’s current proposal because Santa Monica riders deserve access to best-in-class technology. We have on-the-ground experience operating shared scooters in Santa Monica and around the world, giving us the greatest readiness to fulfill the needs of residents without interruption when the pilot program begins.”
In addition to asking people to contact their city officials, Bird and Lime are hosting a rally later today at Santa Monica City hall. But given that most of these electric scooters are manufactured by the same provider and that the services are essentially the same, I’d be surprised if there’s much brand loyalty. Over in San Francisco, I personally miss having electric scooters, but I really don’t give a rat’s pajamas which services receive permits. That’s just to say, we’ll see if these efforts are effective.
It seems Amazon didn't know what it had on its hands when it released the first Echo in late 2014. The AI-powered speaker formed the foundation of the next big moment in consumer electronics. Those devices have helped mainstream consumer AI and opened the door to wide-scale adoption of connected home products.
New numbers from NPD, naturally, don't show any sign of flagging for the category. According to the firm, the devices are set for 50 percent dollar growth between 2016-2017 and 2018-2019. The category is projected to add $1.6 billion through next year.
The Echo line has grown rapidly over the past four years, with Amazon adding the best-selling Dot and screen-enabled products like the Spot and Show. Google, meanwhile, has been breathing down the company's neck with its own Home offerings. Google also recently added a trio of "smart displays" designed by LG, Lenovo and JBL.
A new premium category has also arisen, led by Apple’s first entry into the space, the HomePod. Google has similarly offered up the Home Max, and Samsung is set to follow suit with the upcoming Galaxy Home (which more or less looks like a HomePod on a tripod).
As all of the above players were no doubt hoping, smart speaker sales also appear to be driving sales of smart home products, with 19 percent of U.S. consumers planning to purchase one within the next year, according to the firm.
While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the game, and the latter especially is a treat to experience.
The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.
That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV: 210 degrees horizontal and 130 vertical. That's nearly twice as wide as the 110 degrees that the most popular headsets offer, and believe me, it makes a difference. (I haven't tried the Pimax 8K, which has a similarly wide FOV.)
On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.
In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.
To throw shade at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree one found in other headsets. I suspect it was slightly exaggerated, but the difference really is clear.
It's reasonably light and comfortable (though no VR headset really is either), and it doesn't feel as chunky as it looks.
The resolution of the custom AMOLED display is supposedly 5K. But the company declined to specify the actual resolution when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels.
Let’s do the math: 16 million divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels between the two displays. Divide 5.3 million by that and you get 1060. Rounding those off to semi-known numbers gives us 2560 pixels (per eye) for the horizontal and 1080 for the vertical resolution.
That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.
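The back-of-the-envelope arithmetic in the last two paragraphs is easy to verify; the inputs below are the article's claimed numbers, and the rounding choices are mine:

```python
# Reproducing the resolution estimate from the claimed spec numbers.
subpixels = 16_000_000        # claimed RGB sub-pixels
full_pixels = subpixels / 3   # full RGB pixels: roughly 5.33 million
horizontal = 5_000            # "5K": roughly 5,000 px across both displays
vertical = full_pixels / horizontal   # ~1,067, close to the familiar 1080
per_eye_horizontal = horizontal / 2   # ~2,500, close to the familiar 2560
print(round(full_pixels), round(vertical), round(per_eye_horizontal))
```

None of this is a confirmed spec; it just shows the estimate is internally consistent with the sub-pixel count the company volunteered.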
The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.
Unfortunately, the demos of eye-tracking were pretty limited (think a heat map of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.
One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it's different for everyone and can be important for rendering the image correctly. One less thing to worry about.
The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — that’s for the commercial providers for whom it’s an option.
Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.
Cytera CellWorks hopes to revolutionize the so-called "clean meat" industry through the automation of cell cultures. If all goes to plan, that could one day put the company's products in every grocery store in America.
Cytera is a ways off from that happening, though. Founded in 2017 by two college students in the U.K., Ignacio Willats and Ali Afshar, Cytera uses robotic automation to configure cell cultures used in things like growing turkey meat from a petri dish or testing stem cells.
The two founders, Willats (the events and startups guy) and Afshar (the scientist), like to do things differently to better configure the lab as well, like strapping GoPros to lab workers' heads. The two came together at Imperial College London to run an event for automation in the lab, and from there formed their friendship and their company.
“At the time, lab automation felt suboptimal,” Afshar told TechCrunch, further explaining he wanted to do something with a higher impact.
Cellular agriculture, or growing animal cells in a lab, seems to hit that button and the two are currently enrolled in Y Combinator’s Summer 2018 cohort to help them get to the next step.
There’s been an explosion in the lab-made meat industry, which relies on taking a biopsy of animal cells and then growing them in a lab to make the meat versus getting it from an actual living, breathing animal. In just the last couple of years startups like Memphis Meats have started to pop up, offering lab meat to restaurants. Even the company known for its vegan mayo products, Hampton Creek (now called Just), is creating a lab-grown foie gras.
Originally, the company was going to go for general automation in the lab, but it had enough interest from clients and potential business in just the cell culture automation aspect that it changed the name for clarity. Cytera already has some promising prospects, too, including a leading gene therapy company the two couldn't name just yet.
Of course, automation in the lab is nothing new, and big pharma has already poured billions into it for drug discovery. One could imagine a giant pharma company teaming up with a meat company looking to get into the lab-made meat industry and doing something similar, but so far Willats and Afshar say they haven't really seen that happening. They say bigger companies are much more likely to partner with smaller startups like theirs to get the job done.
Obviously, there are trade-offs at either end. But should Cytera make it, you may one day find yourself eating a chicken breast built by a company that bought cells made in the Cytera lab.
Twitter announced this afternoon that it will begin booting off its service accounts belonging to users who have tried to evade suspension. The company says the accounts in question belong to users who were previously suspended on Twitter for abusive behavior, or for trying to evade a prior suspension. These bad actors have been able to work around Twitter's attempts to remove them by setting up new accounts, it seems.
The company says the new wave of suspensions will hit this week and will continue in the weeks ahead, as it’s able to identify others who are “attempting to Tweet following an account suspension.”
Twitter’s announcement on the matter – which came in the form of a tweet – was light on details. We asked the company for more information. It’s unclear, for example, how Twitter was able to identify the same persons had returned to Twitter, how many users will be affected by this new ban, or what impact this will have on Twitter’s currently stagnant user numbers.
Twitter was not able to answer our questions, when asked for comment.
The company has been more recently focused on aggressively suspending accounts, as part of the effort to stem the flow of disinformation, bots, and abuse on its service. The Washington Post, for example, said last month that Twitter had suspended as many as 70 million accounts between the months of May and June, and was continuing in July at the same pace. The removal of these accounts didn’t affect the company’s user metrics, Twitter’s CFO later clarified.
Even though they weren’t a factor, Twitter’s user base is shrinking. The company actually lost a million monthly active users in Q2, with 335 million overall users and 68 million in the U.S. In part, Twitter may be challenged in growing its audience because it’s not been able to get a handle on the rampant abuse on its platform, and because it makes poor enforcement decisions with regard to its existing policies.
For instance, Twitter is under fire right now for the way it chooses who to suspend, as it’s one of the few remaining platforms that hasn’t taken action against conspiracy theorist Alex Jones.
The Outline even hilariously (???) suggested today that we all abandon Twitter and return to Tumblr. (Disclosure: Oath owns Tumblr and TC. I don’t support The Outline’s plan. Twitter should just fix itself, even if that requires new leadership.)
In any event, today’s news isn’t about a change in how Twitter will implement its rules, but rather in how it will enforce the bans it’s already chosen to enact.
In many cases, banned users would simply create a new account using a new email address and then continue to tweet. Twitter’s means of identifying returning users has been fairly simplistic in the past. To make sure banned users didn’t come back, it used information like the email, phone and IP address to identify them.
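The simplistic matching described above can be sketched in a few lines. This is a hypothetical illustration, not Twitter's actual system, and all the identifiers are made up:

```python
# Hypothetical ban-evasion check: a new signup is compared against
# identifying signals (email, phone, IP) recorded from banned accounts.
banned_signals = {
    ("email", "spammer@example.com"),
    ("phone", "+15555550100"),
    ("ip", "203.0.113.7"),
}

def looks_like_ban_evasion(signup: dict) -> bool:
    """True if any identifying signal matches a banned account's."""
    signals = {(key, signup[key]) for key in ("email", "phone", "ip")}
    return not signals.isdisjoint(banned_signals)

returning = {"email": "fresh@example.com",
             "phone": "+15555550100",  # reused phone number gives it away
             "ip": "198.51.100.4"}
clean = {"email": "new@example.com",
         "phone": "+15555550199",
         "ip": "192.0.2.10"}
print(looks_like_ban_evasion(returning), looks_like_ban_evasion(clean))  # True False
```

The weakness is obvious from the code: a banned user who changes all three signals sails through, which is presumably why more sophisticated behavioral detection would be attractive.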
For it to now be going after a whole new lot of banned accounts who have been attempting to avoid their suspensions, Twitter may be using the recently acquired technology from anti-abuse firm Smyte. At the time of the deal, Twitter had praised Smyte’s proactive anti-abuse systems, and said it would soon put them to work.
This system may pick up false positives, of course – and that could be why Twitter noted that some accounts could be banned in error in the weeks ahead.
Reached for comment, Twitter declined to answer our specific questions and said it could also not go into further details as that would give those attempting to evade a suspension more insight into its detection methods.
“This is a step we’re taking to further refine our work and close existing gaps we identified,” a spokesperson said. “This is specifically targeting those previously suspended for abusive behavior. Nothing to share on amount of accounts impacted since this work will remain ongoing, not just today.”
Updated, 8/14/18, 3:51 PM ET with Twitter’s comment.
Disrupt SF is right around the corner, which means startupland is prepping to congregate once again in the city for another epic run of investors, startups and celebrities. This year, Disrupt is heading to Moscone West, so the event will be bigger and better than ever.
And I have some good news for you. Initialized Capital’s Garry Tan will join Connie Loizos and Alex Wilhelm live on the Showcase Stage at 3 pm on Thursday, September 6, to dig through the latest, greatest and worst from the world of venture capital.
That’s right, you can come to Disrupt and watch us sit on tall stools holding mics while we talk about the week’s money news in front of a bustling crowd of onlookers. Live tapings are fun because we can’t run the intro a second time if we mess it up. So come on down and hang out with us. Alex may even wear a shirt with buttons.
And it gets better. If you want to obtain a discounted ticket to Disrupt (and why wouldn’t you?), head to the ticket page and use the code “EQUITY” to get 15 percent off. Come for Equity and stay to see Aileen Lee, Reid Hoffman, Drew Houston, Anne Wojcicki, Arlan Hamilton, Ashton Kutcher, Mike Judge and so very many more people you’ve heard of on the Disrupt stage. To whet your appetite until the big show begins, click here to see the full agenda. It’s a good one. See you at Disrupt!
HQ Trivia's app store ranking has continued to sink over the past three months, but it's hoping a new version on your television could revitalize growth. HQ today launched an Apple TV app that lets users play the twice-daily live quiz game alongside iOS and Android players. "Everything about the game is still the same – same questions, same time, same rules," says a spokesperson, except you'll play with the Apple TV remote instead of your phone's screen. But that might not be enough to get HQ's player count rapidly growing again.
According to App Annie’s app store ranking history, on iOS HQ has fallen from the No. 1 U.S. trivia game to No. 10, from the No. 44 game to No. 196, and from the No. 151 overall app to No. 585. It’s exhibited a similar decline on Android. Analytics firm Sensor Tower estimates HQ has seen 12.5 million lifetime installs by unique users, with about 68 percent on iOS. “Installs have been on the decline. For last month, we estimate them with about 560K, which is down from their height of more than two million per month back in February,” Sensor Tower’s head of mobile insights Randy Nelson tells TechCrunch.
The question is whether this is just a summer lull, as people spend time outside and students aren't locked into the schedule of school, or if HQ is in a downward spiral beyond seasonal fluctuations. If we zoom out, you can see that HQ has been dropping down the charts through the school year since peaking in January. At one point it climbed as high as the No. 3 game and No. 6 overall app. The app's record high of concurrent players has also declined from a peak of 2.38 million in late March.
[Update: The CEO of HQ Trivia parent company Intermedia Labs and the former co-founder of Vine, Rus Yusupov, weighed in on the decline in downloads and HQ’s plans. He says, “Games are a hits business and don’t grow exponentially forever,” signalling the drop-off was expected and the team is still optimistic. But he also notes that HQ is “developing new game formats, one of which we think is really special and complements Trivia nicely”, indicating that HQ will branch out beyond its 12-question everyone vs everyone approach.]