On October 18 — just one week away — some of the most brilliant and innovative minds in reality creation will gather at UCLA’s Royce Hall in Los Angeles to attend TC Sessions AR/VR 2018. Whether you’re an early start-up founder, an investor, a developer or a student, if you’re focused on AR/VR, you don’t want to miss this day-long intensive that goes deep into the current and future state of augmented and virtual realities.
Need a bit more convincing? Here are four reasons why you should buy a ticket and attend TC Sessions AR/VR 2018.
1. Deep-dive discussions
We have an outstanding roster of speakers ready to take the stage and go deep on both the opportunities and the challenges facing the AR/VR industry now and in the future. Here are just some of the people and topics we have on tap.
Niko Bonatsos, managing director at General Catalyst, Jacob Mullins, a partner at Shasta Ventures and Catherine Ulrich, managing director at FirstMark Capital will offer a reality check on the state of AR/VR funding — and discuss where the opportunities lie.
Survios co-founders Nathan Burba and James Illiff will talk VR gaming. The big question is whether VR gaming will continue to be a big opportunity and whether the studio can keep the momentum rolling.
Stephanie Zhan, a partner at Sequoia Capital, discusses how to build an inclusive — if virtual — future. As we spend more time in online virtual worlds, can the game developers who build them address the social issues we encounter?
2. Presentations: The challenging future of AR/VR
From expensive hardware to breaking out beyond gaming, AR/VR technology faces hurdles to widespread adoption. Heavy-hitters at Oculus, Facebook, and Snap (to name a few) weigh in on this important subject. Here’s a taste.
Finding users isn’t the only hurdle when it comes to augmented reality. Creating developer platforms ranks right up there on the AR challenge-o-meter. Eitan Pilipski, a VP at Snap, will talk about leveraging the company’s extensive AR selfie-filter expertise to attract more developers.
Yelena Rachitzky is an executive producer of experiences at Oculus, a company that’s invested hundreds of millions of dollars into VR content. She’ll discuss how the company plans to help Facebook kickstart its VR future. Will Facebook’s customers buy in?
Speaking of Facebook’s future, Ficus Kirkpatrick leads the company’s camera team, and he’ll talk about the company’s entry into AR — by augmenting customers’ smartphone cameras. But where will Facebook’s AR journey lead?
3. Networking
You won’t find a better opportunity to connect with the leaders, innovators, investors and makers within the AR/VR community. Whether you’re looking for collaborators, an investment opportunity, your next job or your next round of funding, you’ll find the people who can make it happen at TC Sessions AR/VR 2018 — all in one day, all in one place.
4. Build community
Community building goes beyond simple networking. It’s like-minded people sharing their ideas, philosophies and dreams. It’s about learning from each other and then returning to the work with renewed inspiration. Come and enrich the community.
TC Sessions AR/VR 2018 takes place on October 18 at UCLA’s Royce Hall in Los Angeles. Tickets cost $149, but you can save 35 percent simply by tweeting your attendance. Go buy a ticket and join your people for one incredible, inspiring day. We can’t wait to see you next week!
For a lot of consumers, Pokemon Go wasn’t their first exposure to augmented reality; it was the dog selfie lens inside Snapchat.
In the past few years, what people actually use AR for hasn’t evolved much, even though the technical capabilities have taken some giant leaps. Snap was an early leader, but the industry is now much more crowded, with Apple, Google, Facebook and others all staffing up extensive teams focused on smartphone-based AR capabilities.
At our one-day TC Sessions: AR/VR event in LA on October 18, we’ll be chatting with Eitan Pilipski, the VP of Snap’s Camera Platform, a role that would seem to be pretty central to the long-term vision of a company that has long referred to itself as “a camera company.”
Snap has been shipping updates to its developer tools of late, especially for its Lens Studio product, which gives developers tools to create AR masks and experiences. There’s a lot of room to grow, and it will be interesting to see how much depth Snap can pull from these short experiences and whether it sees “lenses” evolving to bring users more straightforward utility in the near term.
The company hasn’t had the easiest time as a public company lately, but it’s clear that it sees computer vision and augmented reality as key parts of the larger vision it hopes to achieve. At our LA event we’ll dive deeper into how Snap is approaching these technologies and what they can bring consumers beyond a little added enjoyment.
As a special offer to TechCrunch readers, save 35% on $149 General Admission tickets when you use this link or code TCFAN. Student tickets are just $45 and can be booked here.
Tim Merel is managing director of Digi-Capital.
Digi-Capital’s AR/VR/XR Analytics Platform showed Chinese investments into computer vision and augmented reality technologies surging to $3.9 billion in the last 12 months, while North American augmented and virtual reality investment fell from nearly $1.5 billion in the fourth quarter of 2017 to less than $120 million in the third quarter of 2018. At the same time, VC sentiment on virtual reality softened significantly.
What a difference a year makes.
What VCs said a year ago
When we spoke to venture capitalists last year, they had some pretty strong opinions.
Mobile augmented reality and Computer Vision/Machine Learning (“CV/ML”) are at opposite ends of the spectrum — one delivering new user experiences and user interfaces and the other powering a broad range of new applications (not just mobile augmented reality).
The market for mobile AR is very early stage, and could see $50 million to $100 million exits in 2018/2019. Dominant companies will take time to emerge, and it will also take time for developers to learn what works and for consumers and businesses to adopt mobile AR at scale (note: Digi-Capital’s base case is that mobile AR revenue won’t really take off until 2019, despite an installed base of 900 million by Q4 2018). Tech investors are most interested in native mobile AR with critical use cases, not ports from other platforms.
Computer vision and visual machine learning is more advanced than mobile AR, and could see dominant companies in the near-term. Here, investors love startups with real-world solutions that are challenging established industries and business practices, not research projects. Firms are investing in more than 20 different mobile augmented reality and computer vision and visual machine learning sectors, but there is the potential for overfunding during the earliest stages of the market.
What VCs did in the last 12 months
Perhaps the most crucial observation is the declining deal volumes over the last year.
Deal volume (the number of deals) declined steadily by 10% per quarter over the last 12 months, and in Q3 2018 was around two-thirds of its Q4 2017 level. Most of the decline happened in the US and Europe, where VCs increasingly stayed on the sidelines, looking for short-term traction as a sign of long-term growth. (Note: data normalized to exclude the HTC ViveX accelerator in Q4 2017, which skews the figures.)
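As a quick sanity check on how those two figures fit together (a back-of-the-envelope sketch, not Digi-Capital’s actual data): a steady 10% quarter-on-quarter decline compounds multiplicatively, so over the four quarters in a 12-month span it leaves roughly two-thirds of the starting level.

```python
# A steady 10% quarter-on-quarter drop compounds multiplicatively,
# so after n quarters the remaining fraction is 0.9 ** n.
quarterly_decline = 0.10
quarters = 4  # four quarters in a 12-month span

retained = (1 - quarterly_decline) ** quarters
print(f"Fraction of starting deal volume remaining: {retained:.4f}")
# 0.9 ** 4 = 0.6561, i.e. roughly two-thirds of the Q4 2017 level
```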
Deal Volume (number of deals by stage)
The biggest casualties of this short-termist approach have been early stage startups raising seed (deal volume down by more than half) and some series A (deal volume down by a quarter) rounds. This trend has been strongest in North America and Europe, but even Asia has not been entirely immune from some early stage deal volume decline.
While deal volume is a great indicator of early-stage investment market trends, deal value (dollars invested) gives a clearer picture of where the big money has been going over the last 12 months. (Note: investment means new VC money into startups, not internal corporate investment, which is a cost.) Global investment hit its previous quarterly record of over $2 billion in Q4 2017, driven by a few very large deals. It then dropped back to around $1 billion in the first quarter of this year. Since then deal value has climbed steadily quarter-on-quarter, reaching a new record high of well over $2 billion in Q3 2018.
Over $4 billion of the total $7.2 billion invested in the last 12 months went into computer vision/AR tech, with well over $1 billion going into smartglasses (the bulk of that into Magic Leap). The next largest sectors were games, at around $400 million, and advertising/marketing, at a quarter of a billion dollars. The remaining 22 industry sectors raised anywhere from the low hundreds of millions of dollars down to single-digit millions in the last 12 months.
A tale of two markets
Deals by Country and Category (dollars)
American and Chinese investment had an inverse relationship over the last 12 months. American investors increasingly chose to stay on the sidelines, while Chinese investors grew more confident, backing a clear vision with long-term investments. The differences in the data couldn’t be more stark.
North American Deals (dollars)
North American investment was almost triple Asian investment in Q4 2017, at a record high of nearly $1.5 billion for the quarter. Despite 2018 being a transitional year for the market (Digi-Capital forecast that market revenue was unlikely to accelerate until 2019), North American quarterly investment fell over 90% to less than $120 million in Q3 2018. American VCs appear to have taken a short-term approach to a long-term opportunity.
China Deals (dollars)
Meanwhile, Chinese VCs have been focused on the long-term potential of the intersection between computer vision and augmented reality, with later-stage Series C and Series D rounds raising hundreds of millions of dollars at a time. This trend increased dramatically in the last 12 months, with SenseTime Group raising over $2 billion in multiple rounds and Megvii close behind at over $1 billion (also multiple rounds).
Smaller investments (by Chinese standards) in the hundreds of millions have gone into companies Westerners might not know, including Beijing Moviebook Technology, Kujiale and more. All this saw Chinese quarterly investment grow 3x in the last 12 months. (Note: some recent Western opinions about market investment trends were based on incomplete data)
Where to from here?
Our team’s investment banking experience shows that forecasting venture capital investment is a fool’s errand. Yet it is equally foolish to ignore hard data, and ongoing discussions with leading investors along Sand Hill Road and in China point to some trends to watch.
American tech investors might continue to wait for market traction before providing the fuel needed for that traction (even if that seems counterintuitive). While this could pose an existential threat to some early stage startups in North America, it’s also an opportunity for smart money with longer time horizons.
Conversely, Chinese VCs continue to back domestic companies which could dominate the future of computer vision/augmented reality. The next 6 months will determine if this is a long-term trend, but it is the current mental model.
If mobile AR revenue accelerates in 2019 as critical use cases and apps emerge (as in Digi-Capital’s base case), this could become a catalyst for renewed investment by American VCs. The big unknown is whether Apple enters the smartphone tethered smartglasses market in late 2020 (as Digi-Capital has forecast for the last few years). This could be the tipping point for the market as a whole (not just investment). However, Apple timing is hard to predict (because Apple), with any potential launch date known only to Tim Cook and his immediate circle.
Steve Jobs said, “You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something – your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.”
Chinese investors embraced a Jobsian approach over the last 12 months, with Western VCs increasingly dot-connecting (or not). It will be interesting to see how this plays out for computer vision/AR investment over the next 12 months, so watch this space.
Augmented reality has the potential to change how we interact with the internet, and as these technologies scale, you can certainly bet that Facebook is going to be looking to shape what’s possible.
At our one-day TC Sessions: AR/VR event in LA next month, we’ll be joined by Ficus Kirkpatrick, Facebook’s Head of Camera AR Platform, to chat about the company’s strategies in 2018 and beyond for augmented reality.
While the bulk of Facebook’s VR ambitions have taken up residence under the Oculus name, the biggest AR platform available right now is the hundreds of millions of smartphones that people already have. Fortunately, Facebook has quite the presence on mobile, but that’s made it even more of a challenge to fit AR ambitions into apps that already have so much going on.
Facebook is not the place most people turn to when they want to take a photo, but the company’s Camera team is hoping to change that by bringing augmented reality face and environment filters deeper into the app.
The Camera Effects AR Platform was Mark Zuckerberg’s hallmark announcement at F8 in 2017, a year when Apple and Google also started getting more vocal in their praise for AR’s potential. In 2018, the company has had some other things keeping it busy, but it has continued to bring AR, with new capabilities, to other areas of its suite of apps.
Right now Facebook is largely focused on the fun and artsy applications of AR, but where will the company take smartphone AR beyond selfie filters towards delivering utility to billions of users? We look forward to chatting with Kirkpatrick about the challenges ahead for the tech giant and the strategies for getting more users to warm up to AR.
The $99 Early Bird sale ends tomorrow, 9/21. Book your tickets today to save $100 before prices go up, and save an additional 25 percent when you tweet your attendance through our ticketing platform.
Student tickets are just $45 and can be purchased here. Student tickets are good for high/middle school students (with chaperone), 2/4-year college students, and master’s/PhD students.
The iPhone XS proves one thing definitively: that the iPhone X was probably one of the most ambitious product bets of all time.
When Apple told me in 2017 that they put aside plans for the iterative upgrade that they were going to ship and went all in on the iPhone X because they thought they could jump ahead a year, they were not blustering. That the iPhone XS feels, at least on the surface, like one of Apple’s most “S” models ever is a testament to how aggressive the iPhone X timeline was.
I think there will be plenty of people who will see this as a weakness of the iPhone XS, and I can understand their point of view. There are about a half-dozen definitive improvements in the XS over the iPhone X, but none of them has quite the buzzword-worthy effectiveness of a marquee upgrade like 64-bit, 3D Touch or wireless charging — all benefits delivered in previous “S” years.
That weakness, however, is only really present if you view it through the eyes of the year-over-year upgrader. As an upgrade over an iPhone X, I’d say you’re going to have to love what they’ve done with the camera to want to make the jump. As a move from any other device, it’s a huge win: you’re going head-first into sculpted OLED screens, face recognition, super durable gesture-first interfaces and a bunch of other genre-defining moves that Apple made in 2017, thinking about 2030, while you were sitting back there in 2016.
Since I do not have an iPhone XR, I can’t really make a call for you on that comparison, but from what I saw at the event and from what I know about the tech in the iPhone XS and XS Max from using them over the past week, I have some basic theories about how it will stack up.
For those with interest in the edge of the envelope, however, there is a lot to absorb in these two new phones, separated only by size. Once you begin to unpack the technological advancements behind each of the upgrades in the XS, you begin to understand the real competitive edge and competence of Apple’s silicon team, and how well they listen to what the software side needs now and in the future.
Whether that makes any difference for you day to day is another question, one that, as I mentioned above, really lands on how much you like the camera.
But first, let’s walk through some other interesting new stuff.
Notes on durability
As is always true with my testing methodology, I treat this as anyone would who got a new iPhone and loaded an iCloud backup onto it. Plenty of other sites will do clean room testing if you like comparison porn, but I really don’t think that does most folks much good. By and large most people aren’t making choices between ecosystems based on one spec or another. Instead, I try to take them along on prototypical daily carries, whether to work for TechCrunch, on vacation or doing family stuff. A foot injury precluded any theme parks this year (plus, I don’t like to be predictable) so I did some office work, road travel in the center of California and some family outings to the park and zoo. A mix of use cases that involves CarPlay, navigation, photos and general use in a suburban environment.
In terms of testing locale, Fresno may not be the most metropolitan city, but it’s got some interesting conditions that set it apart from the cities where most of the iPhones are going to end up being tested. Network conditions are pretty adverse in a lot of places, for one. There’s a lot of farmland and undeveloped acreage and not all of it is covered well by wireless carriers. Then there’s the heat. Most of the year it’s above 90 degrees Fahrenheit and a good chunk of that is spent above 100. That means that batteries take an absolute beating here and often perform worse than other, more temperate, places like San Francisco. I think that’s true of a lot of places where iPhones get used, but not so much the places where they get reviewed.
That said, battery life has been hard to judge. In my rundown tests, the iPhone XS Max clearly went beast mode, outlasting my iPhone X and iPhone XS. Between those two, though, it was tougher to tell. I try to wait until the end of the period I have to test the phones to do battery stuff so that background indexing doesn’t affect the numbers. In my ‘real world’ testing in the 90+ degree heat around here, the iPhone XS did best my iPhone X by a few percentage points, which is in line with Apple’s claims, but my X is also a year old. The XS never once failed to get me through a pretty intense day of testing, though.
In terms of storage I’m tapping at the door of 256GB, so the addition of a 512GB option is really nice. As always, the easiest way to determine what size you should buy is to check your existing free space. If you’re using around 50% of what your phone currently has, buy the same size. If you’re using more, consider upgrading, because these phones are only getting faster at taking better pictures and video, and that will eat up more space.
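That rule of thumb can be written out as a quick check. The tier list and the next-tier-up logic here are my own framing of the advice, not anything Apple publishes:

```python
def recommend_storage(current_gb: int, used_gb: float) -> int:
    """Suggest a capacity using the rule of thumb: if you're using
    more than half of your current phone's storage, step up a tier."""
    tiers = [64, 256, 512]  # iPhone XS capacities
    if used_gb <= current_gb * 0.5:
        return current_gb  # plenty of headroom: same size is fine
    # Otherwise pick the next tier up, or the largest available.
    larger = [t for t in tiers if t > current_gb]
    return larger[0] if larger else tiers[-1]

print(recommend_storage(256, 240))  # nearly full 256GB -> 512
print(recommend_storage(64, 20))    # lots of free space -> 64
```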
The review units I was given both had the new gold finish. As I mentioned on the day, this is a much deeper, brassier gold than the Apple Watch Edition. It’s less ‘pawn shop gold’ and more ‘this is very expensive’ gold. I like it a lot, though it is hard to photograph accurately — if you’re skeptical, try to see it in person. It has a touch of pink added in, especially as you look at the back glass along with the metal bands around the edges. The back glass has a pearlescent look now as well, and we were told that this is a new formulation that Apple created specifically with Corning. Apple says that this is the most durable glass ever in a smartphone.
My current iPhone has held up to multiple falls over 3 feet over the past year, one of which resulted in a broken screen and replacement under warranty. Doubtless multiple YouTubers will be hitting this thing with hammers and dropping it from buildings in beautiful Phantom Flex slo-mo soon enough. I didn’t test it. One thing I am interested in seeing develop, however, is how the glass holds up to fine abrasions and scratches over time.
My iPhone X is riddled with scratches both front and back, something having to do with the glass formulation being harder, but more brittle. Less likely to break on impact but more prone to abrasion. I’m a dedicated no-caser, which is why my phone looks like it does, but there’s no way for me to tell how the iPhone XS and XS Max will hold up without giving them more time on the clock. So I’ll return to this in a few weeks.
Both the gold and space grey iPhones XS have been subjected to a coating process called physical vapor deposition or PVD. Basically metal particles get vaporized and bonded to the surface to coat and color the band. PVD is a process, not a material, so I’m not sure what they’re actually coating these with, but one suggestion has been Titanium Nitride. I don’t mind the weathering that has happened on my iPhone X band, but I think it would look a lot worse on the gold, so I’m hoping that this process (which is known to be incredibly durable and used in machine tooling) will improve the durability of the band. That said, I know most people are not no-casers like me so it’s likely a moot point.
Now let’s get to the nut of it: the camera.
Bokeh, let’s do it
I’m (still) not going to be comparing the iPhone XS to an interchangeable-lens camera, because portrait mode is not a replacement for those; it’s about pulling them out less. That said, this is the closest it’s ever been.
One of the major hurdles that smartphone cameras have had to overcome in their comparisons to cameras with beautiful glass attached is their inherently deep depth of field. Without getting too into the weeds (feel free to read this for more): because they’re so small, smartphone cameras keep virtually everything in the image sharp. This doesn’t feel like a portrait or well-composed shot from a larger camera because it doesn’t produce background blur. That blur was added a couple of years ago with Apple’s portrait mode and has been duplicated since by every manufacturer that matters, to varying levels of success or failure.
By and large, most manufacturers do it in software. They figure out what the subject probably is, use image recognition to find the eyes/nose/mouth triangle, build a quick matte and blur everything else. Apple does more, adding either the parallax of two lenses or the IR projector of the TrueDepth array that enables Face ID to gather a 9-layer depth map.
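The software-only approach can be sketched in a few lines: blur the whole frame, then composite the sharp subject back over it using a matte. This is a toy illustration with a synthetic image, a hand-made rectangular matte and a naive box blur, not any vendor’s actual pipeline:

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Naive box blur: average each pixel with its neighbors
    (edges wrap around, which is fine for a toy example)."""
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * radius + 1) ** 2

# Toy grayscale "photo" and a hand-made subject matte (1 = subject).
image = np.random.default_rng(0).random((32, 32))
matte = np.zeros((32, 32))
matte[8:24, 8:24] = 1.0  # pretend the subject fills the center

# Blur everything, then composite the sharp subject over the blur.
blurred = box_blur(image, radius=3)
portrait = matte * image + (1 - matte) * blurred
```

A real implementation would get the matte from a segmentation model, or, in Apple’s case, from the depth map, rather than from a hard-coded rectangle.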
As a note, the iPhone XR works differently, with fewer tools, to enable portrait mode. Because it only has one lens, it uses focus pixels and segmentation masking to ‘fake’ the parallax of two lenses.
With the iPhone XS, Apple is continuing to push ahead with the complexity of its modeling for the portrait mode. The relatively straightforward disc blur of the past is being replaced by a true bokeh effect.
Background blur in an image is related directly to lens compression, subject-to-camera distance and aperture. Bokeh is the character of that blur. It’s more than just ‘how blurry’; it’s the shapes produced from light sources, the way they change throughout the frame from center to edges, how they diffuse color and how they interact with the sharp portions of the image.
Bokeh is to blur what seasoning is to a good meal. Unless you’re the chef, you probably don’t care what they did; you just care that it tastes great.
Well, Apple chef-ed it the hell up with this. Unwilling to settle for a templatized bokeh that felt good and leave it at that, the camera team went the extra mile and created an algorithmic model that contains virtual ‘characteristics’ of the iPhone XS’s lens. Just as a photographer might pick one lens or another for a particular effect, the camera team built out the bokeh model after testing a multitude of lenses from all of the classic camera systems.
I keep saying model because it’s important to emphasize that this is a living construct. The blur you get will look different from image to image, at different distances and in different lighting conditions, but it will stay true to the nature of the virtual lens. Apple’s bokeh has a medium-sized penumbra, spreading out light sources but not blowing them out. It maintains color nicely, making sure that the quality of light isn’t obscured the way it is in so many other phones’ portrait modes, which just pick a spot and apply a standard Gaussian or disc blur.
Check out these two images, for instance. Note that when the light is circular, it retains its shape, as does the rectangular light. It is softened and blurred, as it would be when diffused through the widened aperture of a regular lens. The same goes for other shapes in reflected-light scenarios.
Now here’s the same shot from an iPhone X, note the indiscriminate blur of the light. This modeling effort is why I’m glad that the adjustment slider proudly carries f-stop or aperture measurements. This is what this image would look like at a given aperture, rather than a 0-100 scale. It’s very well done and, because it’s modeled, it can be improved over time. My hope is that eventually, developers will be able to plug in their own numbers to “add lenses” to a user’s kit.
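There is a reason f-stops are a meaningful scale to expose here: in the standard thin-lens model, the diameter of a background point’s blur circle scales with the physical aperture (focal length divided by f-number). A rough sketch using that classical formula, with an assumed 50mm portrait-style lens rather than Apple’s actual virtual model:

```python
def blur_circle_mm(focal_mm: float, f_number: float,
                   subject_m: float, background_m: float) -> float:
    """Diameter (mm, on the sensor) of the blur circle for a background
    point when the lens is focused on the subject. Thin-lens model."""
    f = focal_mm
    s = subject_m * 1000.0     # subject distance, in mm
    d = background_m * 1000.0  # background distance, in mm
    aperture = f / f_number    # physical aperture diameter
    return aperture * f * (d - s) / (d * (s - f))

# Same scene, two virtual apertures: wider aperture -> bigger blur.
wide = blur_circle_mm(50, 1.4, subject_m=2, background_m=10)
narrow = blur_circle_mm(50, 4.5, subject_m=2, background_m=10)
print(f"f/1.4: {wide:.3f} mm, f/4.5: {narrow:.3f} mm")
```

Because blur scales inversely with the f-number, halving it doubles the blur-circle size with everything else fixed, which is why the f/1.4 end of the slider melts the background so much more than the narrow end.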
And an adjustable depth of focus isn’t just good for blurring; it’s also good for un-blurring. This portrait-mode selfie placed my son in the blurry zone because it focused on my face. Sure, I could turn portrait mode off on an iPhone X and get everything sharp, but now I can choose to “add” him to the in-focus area while still leaving the background blurry. It’s a super cool feature that I think is going to get a lot of use.
It’s also great for removing unwanted people or things from the background by cranking up the blur.
And yes, it works on non humans.
If you end up with an iPhone XS, I’d play with the feature a bunch to get used to what a super-wide-aperture lens feels like. When it’s open all the way to f/1.4 (not the actual widest aperture of the lens, by the way; this is the virtual model we’re controlling), pretty much only the eyes should be in focus. Ears, shoulders, maybe even the nose could be out of the focus area. It takes some getting used to, but it can produce dramatic results.
Developers do have access to one new feature, though: the segmentation mask. This is a more precise mask that aids in edge detailing, improving hair and fine-line detail around the edges of a portrait subject. In my testing it has led to better handling of these transition areas and less clumsiness. It’s still not perfect, but it’s better. And third-party apps like Halide are already utilizing it. Halide’s co-creator, Sebastiaan de With, says they’re already seeing improvements in Halide with the segmentation map.
“Segmentation is the ability to classify sets of pixels into different categories,” says de With. “This is different than a ‘Hot dog, not a hot dog’ problem, which just tells you whether a hot dog exists anywhere in the image. With segmentation, the goal is drawing an outline over just the hot dog. It’s an important topic with self-driving cars, because it isn’t enough to tell you there’s a person somewhere in the image. It needs to know that person is directly in front of you. On devices that support it, we use PEM as the authority for what should stay in focus. We still use the classic method on old devices (anything earlier than iPhone 8), but the quality