Several dummy units of future iPhone models leaked over the weekend, giving a good idea of what to expect in September when Apple introduces its next iPhones.
Most likely, the iPhones in these photos aren’t actual iPhones. They are just dummy units. Every year, a few manufacturers create objects that look exactly like future iPhones.
They are based on leaked design schematics and usually look just like the real thing. Case manufacturers and other accessory makers buy those dummy units to get ready before Apple’s announcement.
As you can see, the bigger device looks just like the existing iPhone X, but bigger. It’s a 6.5-inch second-gen iPhone X Plus. It has two cameras at the back and the familiar notch at the top of the display.
According to rumors, the second-gen iPhone X Plus could cost $999, or the same price as the iPhone X today. Apple could also update the regular 5.8-inch iPhone X with better components and a lower price — $899.
But what about that mysterious 6.1-inch iPhone?
2018 Apple iPhone, iPhone X and iPhone X Plus front panels, shared by Ben Geskin (@VenyaGeskin1) on Twitter, July 17, 2018: pic.twitter.com/fGlzRH5Q6x
Apple wants to offer a more affordable iPhone with a notch for $700. Unlike the second-gen iPhone X and iPhone X Plus, this new iPhone could feature a slightly bigger bezel and an LCD display. OLED is still much more expensive than LCD, so it’s hard to roll it out across the entire lineup.
Apple could also put a single camera at the back of the device and use aluminum instead of stainless steel on the borders. Dimitri12 also shared photos of dummy units on Slashleaks that look like Geskin’s dummies.
When it comes to colors, analyst Ming-Chi Kuo reported that the cheaper model should come in many different colors — grey, white, blue, red and orange. The second-gen iPhone X and iPhone X Plus should come in black, white and gold.
Apple is expected to announce new iPhones in early September. Until then, take those dummy units with a grain of salt.
Line, the company best-known for its popular Asian messaging app, is doubling down on games after it acquired a controlling stake in Korean studio NextFloor for an undisclosed amount.
NextFloor, which has produced titles like Dragon Flight and Destiny Child, will be merged with Line’s games division to form the Line Games subsidiary. Dragon Flight has racked up 14 million users since its 2012 launch — it clocked $1 million in daily revenue at peak. Destiny Child, a newer release in 2016, topped the charts in Korea and has been popular in Japan, North America and beyond.
Line’s own games are focused on its messaging app, which gives them access to social features such as friend graphs, and they have helped the company become a revenue generation machine. Alongside income from its booming sticker business, in-app purchases within games made Line Japan’s highest-earning non-game app publisher last year, according to App Annie, and the fourth highest worldwide. For some insight into how prolific it has been over the years, Line is ranked as the sixth highest earning iPhone app of all time.
But, despite revenue success, Line has struggled to become a global messaging giant. The big guns WhatsApp and Facebook Messenger have in excess of one billion monthly users each, while Line has been stuck around the 200 million mark for some time. Most of its numbers are from just four countries: Japan, Taiwan, Thailand and Indonesia. While it has been able to tap those markets with additional services like ride-hailing and payments, it is certainly under pressure from those more internationally successful competitors.
With that in mind, doubling down on games makes sense, and Line said it plans to focus on non-mobile platforms, including the Nintendo Switch among other consoles, from the second half of this year.
Line went public in 2016 via a dual U.S.-Japan IPO that raised over $1 billion.
iOS users have been reporting problems with excessive battery drain after updating to iOS 11.4.
The update to Apple’s mobile operating system was released at the end of May, adding support for Messages in iCloud, plus some media and entertainment features, such as AirPlay 2 and support for two HomePod speakers to work as a stereo pair.
Safe to say, radically reduced battery life was not among the listed additions.
This TC writer also noticed an alarming degradation in battery performance after updating to iOS 11.4 at the end of last month — with the battery level dropping precipitously even when the handset was left idle.
On Sunday, 9to5Mac reported on a raft of posts on Apple forums complaining about excessive battery drain since updating, and ZDNet flagged complaints around the issue early last month.
At WWDC, Apple pitched Shortcuts as a way to “take advantage of the power of apps” and “expose quick actions to Siri.” These will be suggested by the OS, can be given unique voice commands and will even be customizable with a dedicated Shortcuts app.
But to ignore Shortcuts would be missing out on the bigger picture. Apple’s strengths have always been the device ecosystem and the apps that run on them.
With Shortcuts, both play a major role in how Siri will prove to be a truly useful assistant and not just a digital voice to talk to.
Your Apple devices just got better
For many, voice assistants are a nice-to-have, but not a need-to-have.
It’s undeniably convenient to get facts by speaking to the air, turning on the lights without lifting a finger or triggering a timer or text message — but so far, studies have shown people don’t use much more than these on a regular basis.
People don’t often do more than that because the assistants aren’t really ready for complex tasks yet, and when your assistant is limited to tasks inside your home or commands spoken into your phone, the drawbacks prevent you from going deep.
If you prefer Alexa, you get more devices, better reliability and a breadth of skills, but there’s not a great phone or tablet experience you can use alongside your Echo. If you prefer to have Google’s Assistant everywhere, you must be all in on the Android and Home ecosystem to get the full experience, too.
Plus, with either option, there are privacy concerns baked into how both work on a fundamental level — over the web.
In Apple’s ecosystem, you have Siri on iPhone, iPad, Apple Watch, AirPods, HomePod, CarPlay and any Mac. Add in Shortcuts on each of those devices (except the Mac, though it still has Automator) and suddenly you have a plethora of places to execute all your commands entirely by voice.
Each accessory that Apple users own will get upgraded, giving Siri new ways to fulfill the 10 billion and counting requests people make each month (according to Craig Federighi’s statement onstage at WWDC).
But even more important than all the places where you can use your assistant is how — with Shortcuts, Siri gets even better with each new app that people download. There’s the other key difference: the App Store.
Actions are the most important part of your apps
iOS has always had a vibrant community of developers who create powerful, top-notch applications that push the system to its limits and take advantage of the ever-increasing power these mobile devices have.
Shortcuts opens up those capabilities to Siri — every action you take in an app can be shared out with Siri, letting people interact right there inline or using only their voice, with the app running everything smoothly in the background.
Plus, the functional approach that Apple is taking with Siri creates new opportunities for developers to provide utility to people instead of requiring their attention. The suggestions feature of Shortcuts rewards “acceleration,” showing the apps that provide the most time savings and use for the user more often.
This opens the door to more specialized types of apps that don’t necessarily have to grow a huge audience and serve them ads — if you can make something that helps people, Shortcuts can help them use your app more than ever before (and without as much effort). Developers can make a great experience for when people visit the app, but also focus on actually doing something useful too.
This isn’t a virtual assistant that lives in the cloud, but a digital helper that can pair up with apps, taking unique advantage of Apple’s hardware and software capabilities to truly improve your use of the device.
In the most groan-inducing way possible, “there’s an app for that” is back, and more important than ever. Not only are apps the centerpiece of the Siri experience, but it’s their capabilities that extend Siri’s — the better the apps you have, the better Siri can be.
Control is at your fingertips
Importantly, Siri gets all of this Shortcuts power while keeping the control in each person’s hands.
All of the information provided to the system is securely passed along by individual apps — if something doesn’t look right, you can just delete the corresponding app and the information is gone.
Siri will make recommendations based on activities deemed relevant by the apps themselves, as well, so over-active suggestions shouldn’t be common (unless you’re way too active in some apps, in which case they added Screen Time for you too).
Each of the voice commands is custom per user, as well, so people can ignore their apps’ suggestions and set up the phrases to their own liking. This means nothing is already “taken” because somebody signed up for the skill first (unless you’ve already used it yourself, of course).
Also, Shortcuts don’t require the web to work — the voice triggers might not work, but the suggestions and Shortcuts app give you a place to use your assistant voicelessly. And importantly, Shortcuts can use the full power of the web when they need to.
This user-centric approach, paired with the technical aspects of how Shortcuts works, gives Apple’s assistant a leg up for any consumers who find privacy important. Essentially, Apple devices are only listening for “Hey Siri,” followed by the available Siri domains plus your own custom trigger phrases.
Without exposing your information to the world or teaching a robot to understand everything, Apple gave Siri a slew of capabilities that in many ways can’t be matched. With Shortcuts, it’s the apps, the operating system and the variety of hardware that will make Siri uniquely qualified come this fall.
Plus, the Shortcuts app will provide a deeper experience for those who want to chain together actions and customize their own shortcuts.
There’s lots more under the hood to experiment with, but this will allow anyone to tweak and prod their Siri commands until they have a small army of custom assistant tasks at the ready.
Hey Siri, let’s get started
Siri doesn’t know all, can’t perform any task you bestow upon it and won’t make somewhat uncanny phone calls on your behalf.
But instead of spending time conversing with a somewhat faked “artificial intelligence,” Shortcuts will help people use Siri as an actual digital assistant — a computer to help them get things done better than they might’ve otherwise.
With Siri’s new skills extending to each of your Apple products (except for Apple TV and the Mac, but maybe one day?), every new device you get and every new app you download can reveal another way to take advantage of what this technology can offer.
This broadening of Siri may take some time to get used to — it will be about finding the right place for it in your life.
As you go about your apps, you’ll start seeing and using suggestions. You’ll set up a few voice commands, then you’ll do something like kick off a truly useful shortcut from your Apple Watch without your phone connected and you’ll realize the potential.
This is a real digital assistant, your apps know how to work with it and it’s already on many of your Apple devices. Now, it’s time to actually make use of it.
According to a supply chain report, Apple is preparing to release three iPhone lines this fall. One, a 5.8-inch iPhone X with improved specs and lower price. Two, a new 6.5-inch iPhone X Plus with an OLED screen. And three, a 6.1-inch iPhone with Face ID, which is said to come in a variety of colors including grey, white, blue, red and orange.
Ming-Chi Kuo reports, via 9to5Mac, that the 6.5-inch iPhone X Plus is said to take the $1,000 price point from the iPhone X, which would make the next iPhone X less expensive than its current incarnation. The colorful 6.1-inch iPhone will be the least expensive model, with a price tag around $700. Information about storage was not included in the report.
The least-expensive iPhone is said to resemble the iPhone X and include Face ID, though Apple might reserve the dual-camera option for the higher-priced models. The analyst expects this $700 option to account for 55 percent of new iPhone sales, a share that should increase through 2019.
If the part about the colors is correct, Apple is set to introduce a splash of color to the monochrome phone market. Currently, phones are mostly available in greys and blacks, with most vendors offering a couple of color options through special editions. That’s boring. Apple tried this in the past with its budget-minded iPhone 5c. Making its best-selling model available in colors would be a distinct shift in strategy, and it’s likely other firms such as Samsung and LG would follow the trend and push the smartphone world into a rainbow of colors.
Light, the company behind the wild L16 camera, is building a smartphone equipped with multiple cameras. According to The Washington Post, the company is prototyping a smartphone with five to nine cameras that’s capable of capturing a 64 megapixel shot.
The entire package is not much thicker than an iPhone X, the Post reports. The additional sensors are said to improve the phone’s low-light performance and depth effects, with internal processing stitching the images together.
This is the logical end point for Light. The company introduced the $1,950 L16 camera back in 2015 and started shipping it in 2017. The camera uses 16 lenses to capture 52-megapixel imagery. The results are impressive, especially when the size of the camera is considered. It’s truly pocketable. Yet in the end, consumers want the convenience of a phone with the power of a dedicated camera.
Light is not alone in building a super camera phone. Camera maker RED is nearing the release of its smartphone, which rocks a modular lens system and can be used as a viewfinder for RED’s cinema cameras. Huawei also just released the P20 Pro, which uses three lenses to give the user the best possible options for color, monochrome and zoom. Years ago, Nokia played with high-megapixel phones, stuffing a 41-megapixel sensor into the Lumia 1020 and 808 PureView.
Unfortunately, additional details about the Light phone are unavailable. It’s unclear when this phone will be released. We reached out to Light for comment and will update this report with its response.
Right now, the cable that comes with a new iPhone does not plug into a new MacBook Pro without a dongle. #donglelife is for real. If this leak is correct, though, that wrong might soon be righted.
Photos have surfaced showing what appears to be an engineering prototype of an Apple 18W USB-C charger, which will supposedly be bundled with the next iPhone. If correct, this will let owners take advantage of the iPhone’s fast-charging capabilities without purchasing anything else. Plus, it will let users connect the iPhone to a MacBook Pro out of the box.
This rumor circulated last year, too, though no photos ever surfaced to back up the claim.
If true, this adapter will mark the first major change to the iPhone’s wall charger. Apple has long bundled a 5W charger with the iPhone; it works fine but does not supply enough power to charge the phone at its fastest possible speed. Even if the photos here show something other than an official Apple product, previous leaks suggest Apple is readying something similar.
Apple included fast charging in the iPhone 8, iPhone 8 Plus and iPhone X but didn’t include the charger necessary to take advantage of the technology. Owners have to buy a third-party charger or the $50 30W charger from Apple.
Gather around, campers, and hear a tale as old as time.
Remember the HTC Dream? The Evo 4G? The Google Nexus One? What about the Touch Diamond? All amazing devices. The HTC of 2018 is not the HTC that made these industry-leading devices. That company is gone.
It seems HTC is getting ready to lay off nearly a quarter of its workforce by cutting 1,500 jobs in its manufacturing unit in Taiwan. After the cuts, HTC will employ fewer than 5,000 people worldwide. Five years ago, in 2013, HTC employed 19,000 people.
HTC started as a white-label device maker, giving carriers an option to sell devices branded with their own names. The company also had a line of HTC-branded connected PDAs that competed in the nascent smartphone market. BlackBerry, or Research in Motion as it was called until 2013, ruled this phone segment, but starting around 2007, HTC began making inroads thanks to innovative touch devices that ran Windows Mobile 6.0.
In 2008 HTC introduced the Touch line with the Touch Diamond, Touch Pro, Touch 3G and Touch HD. These were stunning devices for the time. They were fast, loaded with big, user-swappable batteries and microSD card slots. The Touch Pro even had a front-facing camera for video calls.
HTC overlaid a custom skin onto Windows Mobile, making it a bit more palatable for the general user. At that time, Windows Mobile was competing with BlackBerry’s operating system and Nokia’s Symbian. None was fantastic, but Windows Mobile was by far the most daunting for new users. HTC did the best thing it could do and developed a smart skin that gave the phone a lot of features that would still be considered modern.
In 2008 HTC released the first Android device with Google. Called the HTC Dream or G1, the device was far from perfect. But the same could be said about the iPhone. This first Android phone set the stage for future wins from HTC, too. The company quickly followed up with the Hero, Droid Incredible, Evo 4G and, in 2010, the amazing Google Nexus One.
After the G1, HTC started skinning Android in the same fashion as it did Windows Mobile. It cannot be overstated how important this was for the adoption of Android. HTC’s user interface made Android usable and attractive. HTC helped make Android a serious competitor to Apple’s iOS.
In 2010 and 2011, Google turned to Samsung to make the second and third flagship Nexus phones. It was around this time Samsung started cranking out Android phones, and HTC couldn’t keep up. That’s not to say HTC didn’t make a go for it. The company kept releasing top-tier phones: the One X in 2012, the One Max in 2013 and the One (M8) in 2014. But it didn’t matter. Samsung had taken up the Android standard and was charging forward, leaving HTC, Sony and LG to pick from the scraps.
At the end of 2010, HTC was the leading smartphone vendor in the United States. In 2014 it trailed Apple, Samsung and LG with around a 6 percent market share in the U.S. In 2017 HTC captured 2.3 percent of smartphone subscribers, and now, in 2018, some reports peg HTC at less than half a percent of the smartphone market.
Google purchased a large chunk of HTC’s smartphone design talent in 2017 for $1.1 billion. The deal moved more than 2,000 employees under Google’s wing, where they will likely be charged with working on Google’s line of Pixel devices. It’s a smart move. This HTC team was responsible for releasing amazing devices that no one bought, but that’s not entirely its fault; outside forces are to blame. HTC never stopped making top-tier devices.
The HTC of today is primarily focused on the Vive product line. And that’s a smart play. The HTC Vive is one of the best virtual reality platforms available. But HTC has been here before. Hopefully, it learned something from its mistakes in smartphones.
Earlier today we revealed that Apple was re-building maps from the ground up. These are some questions from readers that came up when we went live. You can ask more questions here and I’ll try to add them.
What part of Maps will be new?
The actual map. Apple is building them from scratch, with its own data rather than relying on external partners.
What does that mean in terms of what I’ll see?
New foliage markers, showing you where ground cover like grass and trees exists more accurately. Pools, parking lots, exact building shapes, sports areas like baseball diamonds, tennis and basketball courts and pedestrian pathways that are commonly walked but previously unmapped. There are also some new features like the ability to determine where the entrances are to buildings based on maps data.
Will it look visually different?
Only in terms of additional detail. Maps is not getting a visual “overhaul” yet (though it was implied that it eventually will), but you’ll notice differences immediately.
Does it use information from iPhones?
Yes. It uses anonymized segments of trips you take, called probe data, to determine things like “is this a valid route?” and to glean traffic congestion information.
Can I be identified by this data — does Apple know it’s me making the trips?
No. The only device that knows about your entire trip is your personal device. When information and/or requests are sent to Apple, a rotating random identifier is assigned to chunks of data, which are segmented for additional safety before transmission. Basically, all Apple will ever see is a random slice of any person’s trip without beginning or end connected directly, which it uses to update its maps and traffic info. Not only can it not tell who it came from, Apple says it cannot even reconstruct a trip based on this data — no matter who asks for it.
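The segmentation scheme described above can be illustrated with a toy sketch. This is purely conceptual, not Apple’s actual implementation; the segment size, the endpoint trimming and the identifier format are all invented for the example:

```python
import secrets

def anonymize_trip(points, segment_size=5):
    """Split one trip into short segments tagged with rotating random
    identifiers, so no single record ties the whole trip together.
    Conceptual sketch only -- not Apple's implementation."""
    # Trim the start and end of the trip so no uploaded segment is
    # connected to the trip's origin or destination.
    interior = points[segment_size:-segment_size]
    segments = []
    for i in range(0, len(interior), segment_size):
        chunk = interior[i:i + segment_size]
        if chunk:
            # A fresh random identifier per segment: there is no stable
            # ID linking segments to one another or to a device.
            segments.append({"id": secrets.token_hex(8), "points": chunk})
    return segments

# A fake 40-point trip along one street.
trip = [(37.7700 + 0.0010 * i, -122.4100) for i in range(40)]
segments = anonymize_trip(trip)
print(len(segments))  # 6 segments of 5 interior points each
```

Even a server that received every segment could not stitch the trip back together by identifier, and the trimmed endpoints mean neither the origin nor the destination is ever uploaded, matching the “random slice without beginning or end” behavior described above.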
Can I opt out?
Yes. It will not happen if you do not turn on location services, and it can be toggled off in the privacy settings for Maps. It’s not a new setting; it’s the existing Maps setting.
Will it use more data or battery?
Apple says no. It says the amount of both resources used is so negligible as to be swallowed up by normal efficiency gains.
When is it coming to the rest of the world?
Bay Area in beta next week and Northern California this fall were as much as I got; however, Apple SVP Eddy Cue did say that Apple’s overall maps team was global.
We’ve got a dedicated team — we started this four years ago — across a variety of fields from ML, to map design, to you name it. There’s thousands of people working on this all around the globe from here in the Bay Area, to Seattle, Austin, New York. We have people in other countries, in cities like Berlin, Paris, Singapore, Beijing, Malmö.
I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world-class service.
Maps needs fixing.
Apple, it turns out, is aware of this, so it’s re-building the maps part of Maps.
It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 beta and will cover Northern California by fall.
Every version of iOS will get the updated maps eventually, and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.
This is nothing less than a full re-set of Maps and it’s been four years in the making, which is when Apple began to develop its new data-gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.
“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”
But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.
“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”
In addition to Cue, I spoke to Apple VP Patrice Gautier and more than a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.
If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices, from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.
Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.
It wasn’t enough.
“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.
Because Maps are so core to so many functions, success wasn’t tied to just one function. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.
Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.
There’s also the matter of corrections, updates and changes entering a long loop of submission to validation to update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other updating features in days or less, not months. Not to mention the potential competitive advantages it could gain from building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.
Cue points to the proliferation of devices running iOS, now over a billion, as a deciding factor to shift its process.
“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”
I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.
“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map in real time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.
“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”
So a new effort was created to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high-resolution satellite imagery and brand new intensely high-resolution image data gathered from its ground cars until it had what it felt was a “best in class” mapping product.
There is only really one big company on earth that owns an entire map stack from the ground up: Google.
Apple knew it needed to be the other one. Enter the vans.
Apple vans spotted
Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with “Apple Maps” signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.
The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.
Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.
Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.
Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.
In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid state drives for storage. A single USB cable routes up to the dashboard where the actual mapping-capture software runs on an iPad.
While mapping, a driver…drives, while an operator takes care of the route, ensuring that a coverage area that has been assigned is fully driven, as well as monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.
More on why Apple needs this level of data detail later.
As the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case, which is delivered to Apple’s data center, where a suite of software scrubs private information such as faces and license plates from the images. From the moment of capture to the moment they’re sanitized, the images are encrypted, with one key in the van and the other key in the data center. Technicians and software further down the mapping pipeline never see unsanitized data.
This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.
Probe data and privacy
Throughout every conversation I have with any member of the team during the day, privacy is brought up and emphasized. This is obviously by design; Apple wants to impress upon me as a journalist that it’s taking this very seriously indeed. But it doesn’t change the fact that privacy is evidently built in from the ground up, and I could not find a false note in any of the technical claims or the conversations I had.
Indeed, from the data-security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel it is being held back in any way by declining to hoover up, store and parse every piece of customer data it can.
The consistent message is that the team feels it can deliver a high-quality navigation, location and mapping product without the directly personal data used by other platforms.
“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”
The segments he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the “ground truth” data provided by its own mapping vehicles with this “probe data” sent back from iPhones.
Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is no way to tie any trip back to a single individual. The local system signs the IDs, and only it knows to whom an ID refers. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.
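As a rough sketch of what this segment slicing could look like in code (the function name, segment length and trimming values are my own assumptions for illustration, not Apple’s implementation):

```python
import secrets

def anonymize_trip(points, segment_len=5, trim=3):
    """Slice a trip's GPS points into short segments, each under a fresh
    random identifier, so no segment can be linked to another segment or
    to a user. The start and end of the trip are trimmed and never sent."""
    # Drop the beginning and end of the trip entirely.
    core = points[trim:-trim]
    segments = []
    for i in range(0, len(core) - segment_len + 1, segment_len):
        segments.append({
            "id": secrets.token_hex(8),   # rotating, non-personal identifier
            "points": core[i:i + segment_len],
        })
    return segments

# A toy 20-point trip down a street in San Francisco.
trip = [(37.77 + i * 0.001, -122.41) for i in range(20)]
for seg in anonymize_trip(trip):
    print(seg["id"], len(seg["points"]))
```

Each segment carries its own throwaway ID, and the trip's origin and destination points never leave the device at all.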
Because Apple’s business model does not rely on serving you, say, an ad for a Chevron on your route, it doesn’t need to tie advertising identifiers to users at all.
Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.
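A toy illustration of this kind of on-device learning, with invented data and names; nothing here is Apple’s actual code:

```python
from datetime import time
from statistics import median

# Past observed departures for the weekday commute, stored only on-device,
# expressed as minutes after midnight. Purely illustrative data.
observed_departures = [8 * 60 + 25, 8 * 60 + 40, 8 * 60 + 31, 8 * 60 + 28]

def suggested_departure(history, buffer_minutes=10):
    """Learn a typical departure time locally and suggest leaving a little
    earlier. Nothing in this computation ever touches a server."""
    typical = int(median(history))
    leave = typical - buffer_minutes
    return time(leave // 60, leave % 60)

print(suggested_departure(observed_departures))
```

The point is architectural rather than algorithmic: the history, the statistic and the resulting notification all live on the phone.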
That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving its mapping data in real time.
In short: Traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.
The secret sauce here is what Apple calls probe data: essentially, little slices of vector data representing direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Apple is reaching in and sipping a tiny amount of data from millions of users, giving it a holistic, real-time picture without compromising user privacy.
If you’re driving, walking or cycling, your iPhone can already tell. Now, if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active: say, you check the map or look for directions. If you’re actively using GPS for walking or driving directions, the updates are more precise and can help with improvements like charting new pedestrian paths through parks, building out the map’s overall quality.
All of this, of course, is governed by whether you have opted in to Location Services, and it can be switched off using the Maps location toggle in the Privacy section of Settings.
Apple says this will have a near-zero effect on battery life or data usage, because probe data is only shared while you’re already using Maps features, and it amounts to a small fraction of the power those activities already draw.
From the point cloud on up
But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high-resolution satellite data to combine with its ground-truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities and building shapes.
After the downstream data has been cleansed of license plates and faces, it is run through a battery of computer vision programs to pull out addresses, street signs and other points of interest. These are cross-referenced with publicly available data, like addresses held by the city, and with new construction of neighborhoods or roadways that comes from city planning departments.
But one of the special-sauce ingredients Apple is adding to the mix of mapping tools is a full-on point cloud that maps the world around the mapping van in 3D. This gives Apple all kinds of opportunities to better understand which items are street signs (retro-reflective rectangular object about 15 feet off the ground? Probably a street sign) or stop signs or speed-limit signs.
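That rule of thumb might look something like this toy heuristic. The thresholds and field names are invented for illustration; Apple’s actual classifier is certainly far more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """A cluster of LiDAR points; the fields are simplified assumptions."""
    height_ft: float        # height of the cluster centroid above the road
    width_ft: float         # horizontal extent of the cluster
    height_extent_ft: float # vertical extent of the cluster
    reflectivity: float     # 0..1, LiDAR return intensity

def looks_like_street_sign(c: Cluster) -> bool:
    # A retro-reflective, roughly plate-sized object mounted well above
    # the road surface: the kind of heuristic described in the article.
    mounted_high = 10 <= c.height_ft <= 20
    plate_sized = 1 <= c.width_ft <= 4 and 0.5 <= c.height_extent_ft <= 3
    retro_reflective = c.reflectivity > 0.8
    return mounted_high and plate_sized and retro_reflective

print(looks_like_street_sign(Cluster(15, 2.5, 1.0, 0.95)))  # True
print(looks_like_street_sign(Cluster(3, 2.5, 1.0, 0.2)))    # False
```

In practice such geometric checks would be one signal among many feeding the machine learning pipeline described below.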
It seems like it also could enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on “any future plans” for such things.
Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.
The coupling of high-resolution image data from car and satellite, plus a 3D point cloud, results in Apple now being able to produce full orthogonal reconstructions of city streets with textures in place. This is massively higher-resolution and easier to see, visually. And it’s synchronized with the “panoramic” images from the car, the satellite view and the raw data. These techniques are used in self-driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to “see” through brush or tree cover that would normally obscure roads, buildings and addresses.
This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.
Apple has had a team of tool builders working specifically on a toolkit that can be used by human editors to vet and parse data, street by street. The editors’ suite includes tools for assigning specific geometries to Flyover buildings (think Salesforce Tower’s unique ridged dome) so that they are instantly recognizable. It lets editors view real images of street signs shot by the car right next to 3D reconstructions of the scene and the computer vision detections of the same signs, and verify them as accurate or not.
Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether it’s misplaced and shift it around. It also allows access points to be set, making Apple Maps smarter about the “last 50 feet” of your journey. You’ve made it to the building, but which street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.
“When you look at places like San Francisco or big cities from that standpoint,” says Cue, “you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”
Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those, as well.
Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.
And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.
Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.
A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.
Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then also maintain them as the ever-changing biomass wreaks havoc on roadways, addresses and the occasional park.
Here there be Helvetica
Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.
These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.
The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the U.S., it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.
This is the department of details. The team has reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces you see on your favorite subway systems, like Helvetica for NYC. And the line numbers appear in the exact same order you’ll see them on the platform signs.
It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate into the digital world represented by Maps.
The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the “current” version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.
Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover, including grass and trees, represented on the map, as well as buildings, building shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.
Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included, as well.
What you won’t see, for now, is a full visual redesign.
“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”
Apple Maps is getting the long-awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.
The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.
“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the U.S.”
Apple has been fined AUS$9M (~$6.6M) by a court in Australia following a legal challenge by a consumer rights group related to the company’s response after iOS updates bricked devices that had been repaired by third parties.
The Australian Competition and Consumer Commission (ACCC) investigated a series of complaints relating to an error (‘error 53’) that disabled some iPhones and iPads after owners downloaded an update to Apple’s iOS operating system.
The ACCC says Apple admitted that, between February 2015 and February 2016 (via the Apple US website, Apple Australia’s in-store staff and customer service phone calls), it had informed at least 275 Australian customers affected by error 53 that they were no longer eligible for a remedy if their device had been repaired by a third party.
The court found that Apple’s conduct breached Australian consumer law.
“If a product is faulty, customers are legally entitled to a repair or a replacement under the Australian Consumer Law, and sometimes even a refund. Apple’s representations led customers to believe they’d be denied a remedy for their faulty device because they used a third party repairer,” said ACCC commissioner Sarah Court in a statement.
“The Court declared the mere fact that an iPhone or iPad had been repaired by someone other than Apple did not, and could not, result in the consumer guarantees ceasing to apply, or the consumer’s right to a remedy being extinguished.”
The ACCC notes that after it notified Apple about its investigation, the company implemented an outreach program to compensate individual consumers whose devices were made inoperable by error 53. It says this outreach program was extended to approximately 5,000 consumers.
It also says Apple Australia offered a court enforceable undertaking to improve staff training, audit information about warranties and Australian Consumer Law on its website, and improve its systems and procedures to ensure future compliance with the law.
The ACCC further notes that a concern addressed by the undertaking is that Apple was allegedly providing refurbished goods as replacements, after supplying a good which suffered a major failure — saying Apple has committed to provide new replacements in those circumstances if the consumer requests one.
“If people buy an iPhone or iPad from Apple and it suffers a major failure, they are entitled to a refund. If customers would prefer a replacement, they are entitled to a new device as opposed to refurbished, if one is available,” said Court.
The court also held the Apple parent company, Apple US, responsible for the conduct of its Australian subsidiary. “Global companies must ensure their returns policies are compliant with the Australian Consumer Law, or they will face ACCC action,” added Court.
We’ve reached out to Apple for comment on the court decision and will update this post with any response.
A company spokeswoman told Reuters it had had “very productive conversations with the ACCC about this” but declined to comment further on the court finding.
More recently, Apple found itself in hot water with consumer groups around the world over its use of a power management feature that throttled performance on older iPhones to avoid unexpected battery shutdowns.
The company apologized in December for not being more transparent about the feature, and later said it would add a control allowing consumers to turn it off if they did not want their device’s performance to be impacted.
Grant Miller is the co-founder of Replicated.
As we enter the twentieth year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.
This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.
However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with more than 50 percent of GitHub’s revenue coming from the sale of their on-prem offering, GitHub Enterprise.
Data privacy and security are also becoming major issues, with Benioff himself pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.
It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?
The original cloud software thesis
In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:
- Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, and the customer was then obligated to pay an additional 20 percent each year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and drawn out.
- Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
- Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999, a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
- Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
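The pricing shift in the first point above can be made concrete with a back-of-the-envelope comparison. Every figure below is hypothetical, chosen only to illustrate the two models:

```python
def upfront_cost(license_fee, support_rate, years):
    """Traditional model: a large license paid upfront, plus an annual
    support fee computed as a fraction of the license price."""
    return license_fee + license_fee * support_rate * years

def subscription_cost(per_seat_month, seats, years):
    """SaaS model: pay as you go, scaling with actual usage."""
    return per_seat_month * seats * 12 * years

# Hypothetical deal: a $1M license with 20% annual support fees,
# versus $100 per seat per month for 200 seats.
print(upfront_cost(1_000_000, 0.20, 3))   # three-year traditional cost
print(subscription_cost(100, 200, 3))     # three-year SaaS cost
```

The more telling number is year one: the traditional buyer commits over a million dollars before seeing any value, while the subscriber spends a small fraction of that and can walk away.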
These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-late ’00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.
Suddenly able to access computing resources by the hour — and free from large upfront capital investments or having to manage complex customer installations — startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.
It’s a different IT world in 2018
Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:
- Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe Suite made the switch from the upfront model to thriving subscription businesses years ago. Today, subscription pricing is the norm for many flavors of software and services.
- Better user experience matters: Software accessed through the browser, or through thin native mobile apps leveraging the same APIs and delivered seamlessly through app stores, has long since become ubiquitous. The consumerization of IT was a real trend, and it has carried the habits of our personal lives into our business lives.
In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:
- Computing is now cheap: The cost of compute and storage has been driven down so dramatically that there are limited cost savings left in shared resources. Today, a gigabyte of RAM costs about $5 and a terabyte of disk storage about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, around the time Salesforce was founded, Google was running on its first data center, with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
- Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.
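The cost collapse behind the first bullet above is easy to make concrete. A quick calculation using the per-unit prices cited in this piece, for a server spec of my own choosing:

```python
# Rough hardware cost for a 64 GB RAM, 10 TB storage server, using the
# article's per-unit figures for 1999 versus today.
ram_gb, disk_tb = 64, 10

cost_1999 = ram_gb * 1_000 + disk_tb * 30_000  # $1,000/GB RAM, $30,000/TB disk
cost_now = ram_gb * 5 + disk_tb * 30           # $5/GB RAM, $30/TB disk

print(cost_1999)  # 364000
print(cost_now)   # 620
```

A machine that would have cost a Siebel-era buyer well into six figures now costs a few hundred dollars in raw components, which is why multi-tenancy no longer buys much on hardware economics alone.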
What Benioff didn’t foresee
Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and of cloud computing in general:
- Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
- Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
- Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
- Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
- The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g. GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license — or the nearest thing to one — for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers — another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.
The future of software is location agnostic
As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open-source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.
The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.
So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.
Tue, 05 Jun 2018 13:11:08 +0000
From advancements in AR to Memojis to group FaceTime, there is plenty to be excited about with iOS 12. But one of the more practical updates to Apple’s mobile operating system, coming this fall, went unmentioned during the keynote at WWDC.
According to 9to5Mac, iOS 12 will allow for two different faces to be registered to Face ID.
Up until now, Face ID has only allowed a single appearance to be registered to the iPhone X. 9to5Mac first noticed the update when combing through the iOS 12 beta, where one can find new settings for Face ID that allow users to “Set Up an Alternative Appearance.”
Here’s what the description says:
In addition to continuously learning how you look, Face ID can recognize an alternative appearance.
While that’s about as unclear as a description can be, 9to5Mac tested and confirmed the update, with one caveat: users who register a second face to Face ID will not be able to remove it without starting over from scratch with their own Face ID registration. In other words, if you choose to reset the alternate appearance, you’ll also have to clear out all existing data around your own face, too.
That small inconvenience aside, the ability to add a second face to Face ID makes total sense. Couples often pass their phones back and forth as a matter of practicality, and parents often let their children use their phones to play games and check out apps.
Plus, this may hint at Face ID on the next generation of iPads, which tend to be shared amongst multiple users more often than phones.
Mon, 04 Jun 2018 17:46:58 +0000
Apple’s Craig Federighi announced that Apple was doubling down on performance with the upcoming release of iOS 12 at the WWDC event in San Jose, Calif. today.
What’s more, he said, the company would be making these changes to the full range of iOS devices going back to 2013. “And so for iOS 12, we are doubling down on performance from top to bottom, making improvements to make your device faster and more responsive. And because we want these changes to be available to the full range of our customers, iOS 12 will be available on all the same devices as iOS 11,” Federighi told the WWDC audience.
Perhaps because customers were unhappy to learn about the battery issues with older iOS devices, Federighi stressed that Apple has focused these performance increases on older devices, giving people with older iPhones the maximum lift. Using the iPhone 6 as an example, he gave some figures about the performance increase, stressing that it was still early days. (As an iPhone 6 user, I was listening carefully.)
“Now on that device, iOS delivers a number of improvements. Across common operations you’ll see that apps launch up to 40% faster, the keyboard can come up to 50% faster and you can slide to take a photo up to 70% faster,” he said.
But he said, the biggest focus, and one all iPhone users can appreciate, is that they are working to optimize performance when it’s under load. As Federighi said that’s when you need performance the most and where iOS 12 really shines.
“We put iOS 12 through our stress tests and we saw in those conditions share sheet coming up twice as fast, and apps launching twice as fast. These are big, big improvements,” he stressed.
Lastly, Apple also optimized iOS 12 at the chip level working with the chip team to optimize performance, while taking battery life into account. If you keep the power pedal to the metal for too long, you suck battery, but Apple is trying to find that perfect balance of power and battery life in iOS 12.
“CPUs traditionally respond to an increased demand for performance by slowly ramping up their clock speed. Well, now in iOS 12, we’re much smarter. When we detect that you need a performance lift, when you’re scrolling or launching an app, we ramp up processor performance instantly to its highest state, delivering high performance, and ramp it down just as fast to preserve battery life,” he said.
All of this will be available when iOS 12 is released later this year.
Mon, 04 Jun 2018 16:53:21 +0000
Apple is holding a keynote today at the San Jose Convention Center, and the company is expected to unveil new updates for iOS, macOS, tvOS, watchOS and maybe also some new hardware. At 10 AM PT (1 PM in New York, 6 PM in London, 7 PM in Paris), you’ll be able to watch the event as the company is streaming it live.
Apple is likely to talk about new features for all its software platforms — WWDC is a developer conference, after all. Rumor has it that Apple could also unveil a MacBook Pro update with new Intel processors.
If you have the most recent Apple TV, you can download the Apple Events app in the App Store. It lets you stream today’s event and rewatch old events. Users with old Apple TVs can simply turn on their devices. Apple is pushing out the “Apple Events” channel so that you can watch the event.
And if you don’t have an Apple TV, the company also lets you live-stream the event from the Apple Events section on its website. This video feed works in Safari and Microsoft Edge. And for the first time, Apple says that the video should also work in Google Chrome and Mozilla Firefox.
So to recap, here’s how you can watch today’s Apple event:
- Safari on the Mac or iOS.
- Microsoft Edge on Windows 10.
- Maybe Google Chrome or Mozilla Firefox.
- An Apple TV gen 4 with the Apple Events app in the App Store.
- An Apple TV gen 2 or 3, with the Apple Events channel that arrives automatically right before the event.
Of course, you also can read TechCrunch’s live blog if you’re stuck at work and really need our entertaining commentary track to help you get through your day. We have a big team in the room this year.
Wed, 23 May 2018 17:18:22 +0000
If you bought a battery replacement for an out-of-warranty iPhone last year, you may be eligible for a $50 credit from Apple. The company issued a new support page post this week, announcing the rebate policy, which applies to purchases made at authorized locations.
The move is part of ongoing restitution in the wake of an admission that the company was throttling processing speeds on older model phones in order to save on battery life. Late last year, Apple apologized for not informing users about the issue, promising to be more transparent in the future.
Soon after, the company began offering $29 battery replacements — a $50 discount on out-of-warranty battery replacements. This credit covers those who purchased an out-of-warranty battery at any point in 2017, leading up to that new offer.
The company has promised to send an email to all eligible users with instructions on how to get the credit transferred to their account between now and July 27. Those who don’t get a notification, but still believe themselves to be eligible, can contact Apple directly between now and the end of the year.
Fri, 18 May 2018 00:30:36 +0000
Steven Aquino is a freelance tech writer and iOS accessibility expert.
Following Apple’s education event in Chicago in March, I wrote about what the company’s announcements might mean for accessibility. After sitting in the audience covering the event, the big takeaway I had was Apple could “make serious inroads in furthering special education as well.” As I wrote, despite how well-designed the Classroom and Schoolwork apps seemingly are, Apple should do more to tailor their new tools to better serve students and educators in special education settings. After all, accessibility and special education are inextricably tied.
It turns out, Apple has, unsurprisingly, considered this.
“In many ways, education and accessibility beautifully overlap,” Sarah Herrlinger, Apple’s Senior Director of Global Accessibility Policy and Initiatives, said to me. “For us, the concept of differentiated learning and how the accessibility tools that we build in [to the products] help make that [learning] possible is really important to us.”
Apple’s philosophy toward accessibility and education isn’t about purposely targeting esoteric use cases such as IEP prep or specialized teaching methodologies.
In fact, Apple says there are many apps on the iOS App Store which do just that. The company instead believes special education students and teachers themselves should take the tools as they are and discover creative uses for them. Apple encourages those in schools to take the all-new, low-cost iPad and the new software and make them into the tools they need to teach and learn. It’s a sentiment that hearkens back to how Steve Jobs pitched the original iPad: It’s a slab of metal and glass that can be whatever you wish it to be.
In other words, it’s Apple’s customers who put the ‘I’ in iPad.
In hindsight, Apple’s viewpoint on how it supports special education makes total sense if you understand the company’s ethos. Tim Cook often talks about building products that enrich people’s lives — in an education and accessibility context, that sentiment is often literally true. For many disabled people, iOS and the iPad are the conduit through which they access the world.
Apple ultimately owns the iPad and the message around it, but in actuality it’s the users who really transform it and give it its identity. This is ultimately what makes the tablet exceptional for learning. The device’s design is so inherently accessible that anyone, regardless of ability, can pick it up and go wild.
Apple’s education team is special
At the March event, one of the onstage presenters was Kathleen Richardson, who works at Apple on its ConnectED program. She is one of many on the company’s education team, a group tasked with working with schools and districts to evangelize and integrate Apple products into their curricula.
I spoke with Meg Wilson, a former special education teacher who now works on education efforts inside Apple. A former Apple Distinguished Educator, Wilson is the resident “special education guru” who provides insight into how special education programs generally run. With that knowledge, she provides guidance on how Apple products can augment the process of individualizing and differentiating educational plans for special ed students.
A focus of our discussion was the Schoolwork app and how it could be used to suit the needs of teachers and support staff. One example Wilson cited was that of a speech therapy session, where a speech pathologist could use Schoolwork not necessarily for handouts, but for monitoring students’ progress toward IEP goals. Instead of the app showing a worksheet for the student to complete, it could show a data-tracking document for the therapist, who is recording info during lessons.
“What we need in special ed is data — we need data,” Wilson said. She added Schoolwork can be used to “actually see the progress” students are making right from an iPad without mountains of paper.
A key element to this, according to Wilson, is Schoolwork’s ability to modernize and streamline sharing. It makes conferring with other members of the IEP team a more continuous, dynamic endeavor. Rather than everyone convening once a year for an annual review of students’ progress, Wilson said, Schoolwork allows for “an amazing opportunity for collaboration amongst service providers.”
Wilson also emphasized the overarching theme of personalizing the iPad to suit the needs of teacher and student. “When you are creative with technology, you change people’s lives,” she said.
To her, the iPad and, especially, the new software scale for different learners and different environments really well. For special educators, for instance, Wilson said it’s easy to add one’s entire caseload to Schoolwork and have progress reports at the ready anytime. Likewise, the ability in Classroom to “lock” an entire class (or a single student) into an activity on an iPad, which takes its cues from iOS’s Guided Access feature, helps teachers ensure students stay engaged and on task during class. And for students, the intuitive nature of the iPad makes it so that students can instantly share their work with teachers.
But it isn’t only Apple who is changing education. Wilson made the case repeatedly that third-party developers are also making Apple’s solutions for education more compelling. She stressed there are many apps on the App Store that can help in special education settings (IEP prep, communication boards, etc.), and that Apple hears from developers who want to learn about accessibility and, crucially, how to make their apps accessible to all by supporting the discrete Accessibility features. Wilson shared an anecdote of an eye-opening experience for one developer, who expressed the idea of supporting accessibility “didn’t even occur to him,” but doing so made his app better.
One “big idea” that struck me from meeting with Wilson was how diverse Apple’s workforce truly is. Wilson is a former special education teacher. Apple’s health and fitness team reportedly is made up of such medical professionals as doctors and nurses. Apple’s education team is no different, as my conversation with Wilson attested. It’s notable how Apple brings together so many, from all walks of life, to help inform as they build these products. It really does intersect liberal arts with technology.
Apple makes learning code accessible to all
In early March, Lori Hawkins at the Austin American-Statesman reported on how Apple has made its Everyone Can Code program accessible to all. Hawkins wrote that representatives from Apple visited Austin’s Texas School for the Blind and Visually Impaired to teach students to fly drones with code written in the Swift Playgrounds app. As you’d expect, Swift Playgrounds is fully compatible with VoiceOver and even Switch Control. “When we said everyone should be able to code, we really meant everyone,” Herrlinger told the Statesman. “Hopefully these kids will leave this session and continue coding for a long time. Maybe it can inspire where their careers can go.” Herrlinger also appeared on a panel at the SXSW festival, where she and others discussed coding and accessibility pertaining to Everyone Can Code.
For Global Accessibility Awareness Day this year, Apple has announced that a slew of special education schools are adopting Everyone Can Code into their curricula. In a press release, the company says they “collaborated with engineers, educators, and programmers from various accessibility communities to make Everyone Can Code as accessible as possible.” They also note there are “additional tools and resources” which should aid non-visual learners to better understand coding environments.
In addition to the Texas School for the Blind and Visually Impaired in Austin, Apple says there are seven other institutions across the country that are implementing the Everyone Can Code curriculum. Among them are two Bay Area schools: the Northern California campuses of the California School for the Blind and the California School for the Deaf, both located in Fremont.
At a special kick-off event at CSD, students were visited by Apple employees — which included CEO Tim Cook — who came to the school to officially announce CSB and CSD’s participation in the Everyone Can Code program.
Students arrived at the school’s media lab for what they believed to be simply another day of coding. In reality, they were in for a surprise as Tim Cook made his appearance. Members of Apple’s Accessibility team walked students through controlling drones and robots in Swift Playgrounds on an iPad. Cook — along with deaf activist and actor Nyle DiMarco — toured the room to visit with students and have them show off their work.
In an address to students, Cook said, “We are so happy to be here to kick off the Everyone Can Code curriculum with you. We believe accessibility is a fundamental human right and coding is part of that.”
In an interview Cook told me, “Accessibility has been a priority at Apple for a long time.” He continued: “We believe in focusing on ability rather than disability. We believe coding is a language — a language that should be accessible to everyone.” When I asked about any accessibility features he personally uses, Cook said due to hearing issues he likes to use closed-captioning whenever possible. And because he wears glasses, he likes to enlarge text on all of his devices, particularly the iPhone.
Accessibility-related Apple retail events
As in prior years, Apple is spending the month of May promoting accessibility and Global Accessibility Awareness Day by hosting numerous accessibility-centric events at its retail stores across the globe. (These are done throughout the year too.) These include workshops on the accessibility features across all Apple’s platforms, as well as talks and more. Apple says they have held “over 10,000 accessibility sessions” since 2017.
Today, on Global Accessibility Awareness Day 2018, Apple is holding accessibility-related events at several campuses worldwide, including its corporate headquarters in Cupertino, as well as at its satellite campuses in Austin, Cork and London.
Thu, 10 May 2018 21:14:32 +0000
Note: This is the final article in a three-part series on valuation thoughts for common sectors of venture-capital investment. The first article, which attempts to make sense of the SaaS revenue multiple, can be found here; the second, on public marketplaces can be found here.
Over the past year, the VC-backed hardware category got a big boost — Roku was the best-performing tech IPO of 2017 and Ring was acquired by Amazon for a price rumored to exceed $1 billion. In addition to selling into large, strategic markets, both companies have excellent business models. Ring sells a high-margin subscription across a high percentage of its customer base and Roku successfully monetizes its 19 million users through ads and licensing fees.
In the context of these splashy exits, it is interesting to consider the key factors that have made for valuable hardware companies against a backdrop of an investment sector that has often been maligned through the years, as I’m sure we’ve all heard the trope that “hardware is hard.” Despite this perception, hardware investment has grown much faster than the overall VC market since 2010, as shown below.
A large part of this investment growth has to do with the fact that we’ve seen larger exits in hardware over the past few years than ever before. Starting with Dropcam’s* $555 million acquisition in 2014, we’ve seen a number of impressive outcomes in the category, from large acquisitions like Oculus ($2 billion), Beats ($3 billion) and Nest ($3.2 billion) to IPOs like GoPro ($1.2 billion), Fitbit ($3 billion) and Roku* ($1.3 billion)**. Unfortunately for the sector, a few of these companies have underperformed since exit; notably, GoPro and Fitbit have both cratered in the public markets.
As of April 3, 2018, both stocks traded at less than 1x trailing revenue, a far cry from the multiples of forward revenue given to other tech companies. Roku, on the other hand, continues to perform as a stock market darling, trading at approximately 6x trailing revenue and a market cap of $3.1 billion. What sets them so far apart?
The simple answer is their business model — Roku generates a significant amount of high gross margin platform revenue, while GoPro and Fitbit are reliant on continued hardware sales to drive future business, a revenue stream that has been stagnant to declining. However, Roku’s platform is only one successful hardware business model; in this article I’ll explore four others — Attach, Replacement, Razor and Blades, and Chunk.
Attach
“Attaching” a high gross margin annuity stream from a subscription to a hardware sale is a goal for many hardware startups. However, this is often easier said than done — as it’s critical to nail the alignment of the subscription service to the core value proposition of the hardware.
For example, Fitbit rolled out coaching, but people buy Fitbit to track activity and sleep — and this mismatch resulted in a low attach rate. On the other hand, Ring’s subscription allows users to view past doorbell activity, which aligns perfectly with customers looking to improve home security. Similarly, Dropcam sold a subscription for video storage, and at an approximate 40 percent attach rate created a strong economic model. Generally, we’ve found that the attach rate necessary to create a viable business should be at least in the 15-20 percent range.
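The back-of-the-envelope math behind that attach-rate threshold can be sketched with hypothetical numbers — the prices, margins and retention below are illustrative assumptions, not figures from the article:

```python
# A sketch of why attach rate drives hardware economics. All figures here
# (prices, margins, subscription length) are illustrative assumptions.

def blended_gross_profit(units, hw_price, hw_margin, attach_rate,
                         sub_price_monthly, sub_margin, sub_months):
    """Gross profit across a cohort: hardware margin plus the high-margin
    subscription stream earned only on the attached fraction of buyers."""
    hw_profit = units * hw_price * hw_margin
    sub_profit = (units * attach_rate * sub_price_monthly
                  * sub_margin * sub_months)
    return hw_profit + sub_profit

# 100k devices at $199 with a 30% hardware margin; a $3/month subscription
# at 90% margin, held for 36 months on average.
hardware_only = blended_gross_profit(100_000, 199, 0.30, 0.0, 3, 0.90, 36)
with_attach = blended_gross_profit(100_000, 199, 0.30, 0.20, 3, 0.90, 36)
print(f"hardware only:   ${hardware_only:,.0f}")
print(f"with 20% attach: ${with_attach:,.0f}")
```

Under these assumed numbers, a 20 percent attach rate adds roughly a third more gross profit on top of the hardware sale — which is why attach rates well below the 15-20 percent range leave the subscription too small to change the economics.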
Platform
Unlike the “Attach” business model that sells services directly related to improving the core functionality of the hardware device, “Platform” business models create ancillary revenue streams that materialize when users regularly engage with their hardware. I consider Roku or Apple to be in this category; by having us glued to our smartphones or TV screens, these companies earn the privilege of monetizing an app store or serving us targeted advertisements. Here, the revenue stream is not tied directly to the initial sale, and can conceivably scale well beyond the hardware margin that is generated.
In fact, AWS is one of the more successful recent examples of a hardware platform — by originally farming out the capacity from existing servers in use by the company, Amazon has generated an enormously profitable business, with more than $5 billion in quarterly revenue.
Replacement
Despite the amazing economics of Apple’s App Store, as of the company’s latest quarterly earnings report, less than 10 percent of their nearly $80 billion in quarterly revenue came from the “Services” category, which includes their digital content and services such as the App Store.
What really drives value to Apple is the replacement rate of their core money-maker — the iPhone. With the average consumer upgrading their iPhone every two to three years, Apple creates a massive recurring revenue stream that continues to compound with growth in the install base. Contrast this with GoPro, where part of the reason for its poor market performance has been its inability to get customers to buy a new camera — once you have a camera that works “well enough” there is little incentive to come back for more.
Razor and Blades
The best example of this is Dollar Shave Club, which quite literally sold razors and blades on its way to a $1 billion acquisition by Unilever. This business model usually involves a low or zero gross margin sale on the initial “Razor” followed by a long-term recurring subscription of “Blades,” without which the original hardware product wouldn’t work. Recent venture examples include categories like 3D printers, but this model isn’t anything new — think of your coffee machine!
Chunk
Is it still possible to build a large hardware business if you don’t have any of the recurring revenue models mentioned above? Yes — just try to make thousands of dollars in gross profit every time you sell something — like Tesla does. At 23 percent gross margin and an average selling price in the $100,000 range, you’d need more than a lifetime of iPhones to even approach one car’s worth of margin!
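A quick back-of-the-envelope check on that comparison — the Tesla figures come from the paragraph above, while the iPhone selling price and margin are illustrative assumptions, not Apple-reported numbers:

```python
# Gross profit per unit: one Tesla vs. one iPhone.
tesla_asp, tesla_margin = 100_000, 0.23
tesla_profit_per_car = tesla_asp * tesla_margin        # $23,000 of gross profit per car

iphone_asp, iphone_margin = 750, 0.38                  # assumed, not Apple-reported
iphone_profit_per_unit = iphone_asp * iphone_margin    # $285 of gross profit per phone

phones_per_car = tesla_profit_per_car / iphone_profit_per_unit
# At one upgrade every 2-3 years, ~80 phones is 160-240 years of upgrades --
# hence "more than a lifetime of iPhones" per car sold.
print(f"one car is roughly {phones_per_car:.0f} iPhones of gross margin")
```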
So, while I don’t think anyone would disagree that building a successful hardware business has quite literally many more moving parts than software, it’s interesting to consider the nuances of different hardware business models.
While it’s clear that in most cases recurring revenue is king, it’s difficult to say that any of these models is intrinsically superior, as large businesses have been built in each of the five categories covered above. However, if forced to choose, a “Platform” model seems to offer the most unbounded upside, as it’s indicative of a higher-engagement product and isn’t indexed to the original value of the product (some people certainly spend more on the App Store than on the iPhone itself).
While it’s easy to take a narrow view of VC-hardware investing based on the outcome of a few splashy tech gadgets, broadening our aperture just a bit shows us that large hardware businesses have been built across a variety of industries and business models, and many more successes are yet to come.
*Indicates a Menlo Ventures investment
**Initial value at IPO
Thu, 10 May 2018 19:30:45 +0000
Earlier this year, Verizon quietly launched a new startup called Visible, which offers unlimited data, minutes and messaging for $40 a month. To subscribe, users simply download the Visible app (currently available only on iOS) and register. For now, subscriptions are invitation only, and would-be subscribers have to get an invitation from someone who’s already a current Visible member.
Once registration is complete, Visible will send a SIM card the next day, and, once it’s installed, users can access Verizon’s 4G LTE network to stream videos, send texts and make calls as much as their hearts desire.
Visible says there’s no throttling at the end of the month and subscribers can pay using internet-based payment services like PayPal and Venmo (which is owned by PayPal).
The service is only available on unlocked devices — and right now, pretty much only to iPhone users.
“This is something that’s been the seed of an idea for a year or so,” says Minjae Ormes, head of marketing at Visible. “There’s a core group of people from the strategy side. There’s a core group of five or ten people who came up with the idea.”
The company wouldn’t say how much Verizon put into the business to get it off the ground, but the leadership team is made up mostly of former Verizon employees, like Miguel Quiroga, the company’s chief executive.
“The way I would think about it… we are a phone service in the platform that enables everything that you do. The way we launched and the app messaging piece of it. You do everything else on your phone, and a lot of the time, if you ask people, your phone is your life,” said Ormes. The thinking was, “let’s give you a phone that you can activate right from your phone and get ready to go and see how it resonates.”
It’s an interesting move from our corporate overlord (Verizon owns Oath, which owns TechCrunch), which is already the top dog in wireless services, with some 150 million subscribers compared with AT&T’s 141.6 million and a soon-to-be-combined Sprint and T-Mobile subscriber base of 126.2 million.
For Verizon, the new company is likely about holding off attrition. The company shed 24,000 postpaid phone connections in the last quarter, according to The Wall Street Journal, which put some pressure on its customer base (but not really all that much).
Mobile telecommunications remain at the core of Verizon’s business plans for the future, even as other carriers like AT&T look to dive deeper into content (while Go90 has been a flop, Verizon hasn’t given up on content plans entirely). The acquisition of Oath added about $1.2 billion in brand revenue (?) to Verizon for the last quarter, but it’s not anywhere near the kind of media juggernaut that AT&T would get through the TimeWarner acquisition.
Verizon seems to be looking to its other mobile services, through connected devices, industrial equipment, autonomous vehicles, and the development of its 5G network for future growth.
Every wireless carrier is pushing hard to develop 5G technologies, which should see nationwide rollout by the end of this year. Verizon recently completed its 11-city trial run and is banking on expansion of the network’s capabilities to drive new services.
As the Motley Fool noted, all of this comes as Verizon adds new networking capabilities for industrial and commercial applications through its Verizon Connect division — formed in part from the $2.4 billion acquisition of Fleetmatics in 2016, along with Telogis, Sensity Systems and LQD WiFi, to beef up its mobile device connectivity services.
Meanwhile, upstart entrants to challenge big wireless carriers are coming from all quarters. In 2015, Google launched its own wireless service, Project Fi, to compete with traditional carriers and Business Insider just covered another would-be wireless warrior, Wing.
Founded by the team that created the media site Elite Daily, Wing uses Sprint cell-phone towers to deliver its service.
David Arabov and co-founder Jonathan Francis didn’t wait long after taking a $26 million payout for their previous business before getting right back into the startup fray. Unlike Visible, Wing isn’t a one-size-fits-all plan; it’s a much more traditional MVNO. The company has a range of plans, starting at $17 per month for a flip phone and going up to a $27-per-month unlimited plan, according to the company’s website.
As carriers continue to face complaints over service fees, locked-in contracts and poor service options, new alternatives are bound to emerge. In this instance, it looks like Verizon is trying to make itself one of those alternatives.