Publication
Stay current on XR news around the world.
Web XR
Web XR is a means of displaying immersive content through a web browser. This drastically expands the pool of potential users. Instead of needing a high-powered computer and access to a VR app store, the common user can now access XR content through a regular laptop, phone, or tablet. Web XR bridges a massive accessibility gap. The tradeoff is image quality: most consumer devices lack the processing power to render high-quality VR content in a browser.
Getting more technical
Let’s take a look under the hood to understand why Web XR yields lower image quality than a game engine. To program physical properties into a Web XR environment, the development team works with a JavaScript framework such as THREE.js. This acts as a bridge to the kind of low-level graphics functionality that game engines access directly in C++ (the primary language used inside game engines). Crossing this bridge requires passing through the browser’s “sandbox,” which saps speed and reduces performance.
If maximum accessibility is the most important aspect of your XR build, we recommend building with Web XR. If image quality is most important, we advise building in a game engine.
XR Coding Languages
Python
Python is a succinct, object-oriented programming language. Its scripting capabilities allow programmers to design visual assets by writing lines of code into 3D software like Blender. Not only can we generate shapes with Python, we can also manipulate them, scale them, and set them in motion.
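To make that concrete, here is a minimal sketch using Blender’s Python API (bpy); the specific object, scale values, and frame numbers are arbitrary choices for the example, and the script is meant to be run from Blender’s Scripting workspace.

```python
import bpy  # Blender's Python API; available when running inside Blender

# Generate a shape: add a cube to the scene.
bpy.ops.mesh.primitive_cube_add(size=2, location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object

# Manipulate and scale the generated shape.
cube.scale = (1.0, 2.0, 0.5)

# Set it in motion: keyframe the location at frame 1, move it, keyframe again at frame 60.
cube.keyframe_insert(data_path="location", frame=1)
cube.location.x += 5.0
cube.keyframe_insert(data_path="location", frame=60)
```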
Python is the default language of the machine learning world. While machine learning toolboxes are largely written in C++ (for performance), we often use Python as the intermediary tool to access those libraries. In this way, it functions as a librarian who retrieves a book from the stacks and delivers it to the reader. Combining these languages blends the succinctness of Python with the high performance of C++.
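A simple way to see this division of labor is with NumPy, used here as a familiar stand-in for those C/C++-backed toolboxes (the array sizes are arbitrary):

```python
import numpy as np

# Python plays the librarian: it describes the work to be done...
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

# ...while the heavy matrix multiplication runs in compiled C/BLAS code
# behind the scenes, not in the Python interpreter itself.
c = a @ b
print(c.shape)  # (500, 500)
```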
In the VR world, Python is a tool that provides access and adds efficiency to game engine development.
C#
C# is a Microsoft language. Among its most beneficial characteristics is its portability: regardless of whether the programmer is using Linux, Mac, or Windows, the C# syntax remains consistent. It is also an object-oriented programming language that’s compatible with a .NET backend. If the goal of your XR build is to integrate with existing internal software solutions hosted on .NET, C# may be the most appropriate language for the work.
C# is also one of the programming languages compatible with Unity (the most commonly used VR game engine).
Object Orientation: C# is an object-oriented programming language, making it a valuable resource for building interactive VR experiences. Historically, programming has been understood as a logical procedure for performing an action – taking input data, processing it, and producing an output.
At its origin, the essence of programming has been writing the logic rather than defining the data. By contrast, object-oriented programming is rooted in the belief that what we really care about are the objects we want to manipulate rather than the logic required to manipulate them. An “object” could be anything from a human being to a building to the widgets on a web page.
VR is based on the creation of such objects, and C# is the object-oriented programming language driving much of the VR world.
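To make the contrast concrete, here is a small object-oriented sketch. It is written in Python rather than C# to stay consistent with the other examples on this page, and the Door class and its properties are invented purely for illustration: the point is that an object bundles its data with the logic that manipulates it.

```python
class Door:
    """A virtual-world object: its data (state) and behavior live together."""

    def __init__(self, height_m: float, is_open: bool = False):
        self.height_m = height_m
        self.is_open = is_open

    def open(self) -> None:
        # The logic for manipulating the object belongs to the object itself.
        self.is_open = True


# Procedural thinking asks "what steps do I run?";
# object-oriented thinking asks "what things exist, and what can they do?"
front_door = Door(height_m=2.1)
front_door.open()
print(front_door.is_open)  # True
```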
Spatial OS
Spatial OS is a cloud-based platform that hosts collaborative applications built by game developers. It was created to expand beyond the limits of a single server. It’s a distributed operating system with the power to host massive simulations thousands of times bigger and more powerful than what a single computer is capable of building or hosting.
Developers log on and interact with it as an online platform, downloading tools that can be integrated with game engines like Unity and Unreal. Once the world has been built on the developer’s local operating system, the application is packaged and pushed to Spatial OS. With the code hosted and available to other developers on the cloud, it can be grown to massive scales.
Think of the content that exists in Spatial OS as having properties and functionality similar to the physical world. Users log on and travel to these simulated worlds. However, unlike games and worlds that exist on your home computer or gaming system, virtual simulations hosted on Spatial OS exist and evolve even when you’re not logged in. As with the real world outside — let’s say on the street outside your house — if a tree falls or a new car parks along the curb while you’re asleep, that new information is available to you when you walk outside the next morning. The same principle applies when you return to a world hosted on Spatial OS.
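As a toy illustration of that persistence (none of this uses the real Spatial OS tooling; the world loop and client function below are invented for the example), the simulation keeps advancing on the server whether or not anyone is connected, and a returning user simply reads the world as it now stands:

```python
# A toy "world" that keeps evolving on the server side.
world_state = {"tick": 0, "trees_fallen": 0}

def simulate_one_tick():
    world_state["tick"] += 1
    if world_state["tick"] % 100 == 0:
        world_state["trees_fallen"] += 1  # something changed while you were away

def client_logs_in():
    # A returning user doesn't replay anything; they see the world as it is now.
    return dict(world_state)

# The server loop runs regardless of whether any client is connected.
for _ in range(250):
    simulate_one_tick()

print(client_logs_in())  # {'tick': 250, 'trees_fallen': 2}
```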
Massive projects like public transportation construction or renovation in major cities can be first simulated to scale in Spatial OS before being applied in the physical world.
3D Audio
This is a technology that presents sound to the human ears in a manner that resembles how we hear the natural world.
Stereo playback systems (headphones and speakers) emit sound from fixed points in space. When you move your head while wearing headphones, the sounds move with you. Yet when you move your head in relation to sounds of the natural world, the location of the sound source remains fixed. In the case of stationary speakers, the sound remains fixed but one-dimensional.
Without technology, human ears perceive sounds from countless sources and locations simultaneously. Because the ears are positioned on opposite sides of the head, sound waves reach one ear at a slightly different time and with slightly different properties than they reach the other. Much as having two eyes enhances our ability to see in three dimensions, having two ears does the same for the human auditory system. Amplitude, frequency, and timing differences reveal to our ears the specific location of a sound, which direction it’s coming from, and even the properties of the space in which it’s being heard. The most pivotal factors in this dynamic auditory perception are the physical properties of the human ear. Its oval shape, with varying coves, curves, and suppleness, all contribute to the way it receives sound waves and the way the brain interprets sound.
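A rough sense of the timing cue can be sketched in a few lines of Python. This uses a deliberately simplified model (ear spacing of about 0.2 m, speed of sound about 343 m/s, path difference approximated as spacing × sin(angle)); the numbers are illustrative, not a production spatial-audio algorithm.

```python
import math

EAR_SPACING_M = 0.2     # approximate distance between the two ears
SPEED_OF_SOUND = 343.0  # meters per second in air

def interaural_time_difference(angle_deg: float) -> float:
    """Simplified arrival-time gap (in seconds) between the two ears for a
    sound source at the given angle (0 = straight ahead, 90 = directly to one side)."""
    path_difference = EAR_SPACING_M * math.sin(math.radians(angle_deg))
    return path_difference / SPEED_OF_SOUND

for angle in (0, 30, 90):
    microseconds = interaural_time_difference(angle) * 1e6
    print(f"{angle:>3} degrees -> {microseconds:.0f} microseconds")
```

A sound dead ahead arrives at both ears together; a sound off to one side arrives at the far ear a fraction of a millisecond later, and the brain reads that gap, along with level and frequency differences, as direction.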
In order to recreate this sound interpretation with modern technology, VR studios are capturing sound with microphones that resemble the shape of the human ear. These mics record sound not through a flat or circular device but with respect to the natural contours of the ear. When the recorded sounds are played back, they’re more dynamic. They’ve been enriched by the same intricacies as the organ that receives and delivers them to the brain. When hearing 3D audio through a pair of headphones, the various sounds may seem to crawl from one ear to the other, come from 10 feet in front of you, or bleed in from a distance.
This technology is not a new realization. Through the 20th century (and most of the 21st thus far) there has not been a demand for 3D audio, as visual content has been almost exclusively 2D. The emergence of 3D imaging in Virtual Reality now calls for sound technology that is equally dynamic.
Game Engine
A game engine is the software environment where developers build interactive 3D experiences. Game engines combine three elements: graphics, audio, and logic. Think of them as the factory in which a vehicle is constructed — where all the necessary space, tools, and engineering platforms exist. In the current Virtual Reality climate, most developers choose between the two most powerful game engines.
Unity
Unity supports the construction of both 2D and 3D experiences for computers, consoles, and mobile devices. It was first revealed at Apple’s Worldwide Developers Conference in 2005. Since then, five major versions have been released, and more than 100 of the most well-respected experiences in the gaming industry have been created inside it.
Unity is now free to download, making it possible for anyone in the world to gain access to high-end VR development tools. This has been a huge step in the growing ecosystem of VR coders across the globe.
While there are features available for non-coders, understanding how to read and write in one of Unity’s supported programming languages is the ticket to maximizing its potential. Unity has supported three programming languages, yet C# (pronounced “C sharp”) is preferred by most professional developers. C# is an object-oriented language, making it a natural fit for the three-dimensional relationships among objects in a virtual space.
A new feature released in Unity this year is enhanced texturing, allowing developers to create more detailed replicas of complex physical-world objects. The surface of a rock, for instance, with its infinite nuances, is difficult to replicate. Unity’s new texturing feature allows developers to create more lifelike visuals of such complex surfaces.
Unreal
The decision of which game engine to use often comes down to the creator’s existing skills. Are they a coder or a designer? As a coder, the creator will write in one of the languages supported by the game engine. Unreal’s primary language is C++.
While it’s not always the case, coders may be more likely to work in Unity while designers may gravitate toward Unreal’s “visual programming.” Instead of writing scripts, the designer places modules in an open area within the software interface.
Regardless of the game engine in use, creators work within many “frameworks” built into the engine. During the creation of a 3D experience, the creator may want to generate a similar (or even identical) outcome at various points throughout the experience. Perhaps the user’s movement should lead to the same outcome regardless of whether they’re in virtual New York City or virtual San Francisco. Building frameworks is like building bridges: once the bridge has been constructed, crossing that body of water becomes standardized, saving time and energy.
Access to these frameworks is one of the great appeals of industry-leading game engines like Unreal. However, there are cases where the developer may want the flexibility to work outside such parameters or even build their own frameworks. Understanding this, Unreal makes all of its source code (the lines of code written to build the game engine itself) available to subscribers. With this access, the community of Unreal developers has created documentation to help other coders work through the inevitable hurdles of programming in Unreal.
Physics Engine
A physics engine allows us to construct the physical laws of an XR experience. The behavior of light and rain, the laws of gravity, and the relationships among objects are all programmable in a physics engine.
Many virtual environments are centered on the movement of human avatars, so we must build their virtual capabilities and restrictions. When constructing an environment of virtual football players, for example, we assign certain properties to each player: player X weighs Y pounds and has the ability to jump to height Z.
For many virtual creators, it’s important that the constructed environment abides by basic real-world parameters. For example, when the user approaches a wall in a virtual space, the physics engine is the tool used to determine whether the user collides with the wall or passes directly through it.
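As a toy illustration of what a physics engine computes each frame, here is a minimal Python sketch: gravity is applied to an avatar’s velocity, the position is integrated over a time step, and a wall at x = 10 is treated as solid so the avatar collides with it instead of passing through. All of the names and numbers are invented for the example; real engines do far more (rigid bodies, friction, precise collision shapes), but the frame-by-frame loop is the same idea.

```python
GRAVITY = -9.8       # meters per second squared
WALL_X = 10.0        # position of a solid wall along the x axis
TIME_STEP = 1 / 60   # one simulation frame (60 frames per second)

class Avatar:
    def __init__(self, x=0.0, y=2.0, vx=3.0, vy=0.0):
        self.x, self.y = x, y      # position
        self.vx, self.vy = vx, vy  # velocity

    def step(self, dt=TIME_STEP):
        # Apply gravity, then integrate velocity into position.
        self.vy += GRAVITY * dt
        self.x += self.vx * dt
        self.y += self.vy * dt

        # Keep the avatar on the ground rather than falling forever.
        if self.y < 0.0:
            self.y, self.vy = 0.0, 0.0

        # Collision rule: the wall is solid, so the avatar stops at it
        # instead of passing through.
        if self.x > WALL_X:
            self.x, self.vx = WALL_X, 0.0

player = Avatar()
for _ in range(600):  # simulate ten seconds
    player.step()
print(round(player.x, 2), round(player.y, 2))  # pinned against the wall, on the ground
```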
Unity ships with one of the most capable built-in physics engines.
XR Use Cases
This section highlights the industries in which XR is being applied, providing specific use cases for past programs and imagery of what’s to come.
Education
Like the media platforms before it, Extended Reality will continue merging with educational systems around the world and advance visual and auditory learning.
The industry is implementing barcode stickers for the interiors of academic textbooks. Scanning the code with an AR app brings the information off the page. This visual learning also reduces the cognitive load placed on schoolteachers.
In a test run of this program, students learned the basic anatomy of the heart.
Medical
Stanford Medicine is using a software system that combines imaging from MRIs, CT scans, and angiograms to create a three-dimensional model of the patient’s brain prior to surgery.
Surgeons wear a headset and step inside these 3D renderings of the patient’s brain. It’s a pre-op tool that allows for customized planning. Interacting with the three-dimensional images enhances preparation and improves accuracy.
“We can plan out how we can approach a tumor and avoid critical areas like the motor cortex or the sensory areas. Before, we didn’t have the ability to reconstruct it in three dimensions; we’d have to do it in our minds.” -Gary Steinberg, MD, PhD, neurosurgeon
Immersive Art
Artists are already creating and displaying in virtual reality.
Google’s “Tilt Brush” is a virtual reality painting program. Stepping inside, artists select various brush strokes, hues, and implements for designing 3D models. In Tilt Brush and similar customized programs, graphic designers are learning how to design 3D models in a 3D space, enhancing the work they’ve already done in programs like Blender and Maya.
In addition to the creation of 3D models, artists and businesses are displaying artwork in the virtual world. The work of photographers, sculptors, designers, and other visual artists is on display in virtual art galleries. See Infinite Gallery.
Conferencing
Despite the thousands of physical miles that may separate business associates, virtual chat rooms allow people to be together in the same virtual space. Thin, flexible glass-core fibers carrying light signals (fiber optics) send data at rates of around 50 Mbps. Our movements and voices are read and replicated, so recipients can experience these behaviors through simulation software. We’re now calling this technology “Virtual Reality.”
As VR becomes more prevalent in business, it will begin to replace video conferencing. Instead of seeing the image of a colleague’s face on your 2D computer screen, you’ll put on a headset and join them in a virtual conference room, hearing their voice in 3D audio and using virtual controller commands to trigger interaction and demonstration.
Travel & Tourism
National ministries (tourism/trade divisions) are developing content that lets us soar through the sky like an Olympic ski jumper (Korea Tourism Office, ~$100,000), hang out backstage with Sir Paul McCartney (Visit Britain, ~$1.5 million), and swim the crystal clear Caribbean alongside stingrays (Caribbean Island, ~$300,000). These organizations are finding that immersive media engages travelers and influences agencies more effectively than any media prior, with more robust analytics to prove return on investment.
“Before, travelers just had a brochure or information on the website to inform their choices. Virtual reality allows them to get a true sense of the hotel and the excursions they can go on. It’s been a real game changer for us all.” -Marco Ryan, Chief Digital Officer, Thomas Cook
“…Virtual reality lets our travel trade and media partners experience our destination in a new and unique way that has not been possible before.” -Marsha Walden, CEO, Destination British Columbia
Auto
Through various holiday sales and new vehicle features, the automotive industry works to attract buyers to the showroom. Virtual Reality brings a three-dimensional automotive experience to the buyer’s home. Inside their VR headset, the user is able to interact with the vehicle and even sit in the driver’s seat.
First, we capture dozens of photos of both the vehicle’s interior and exterior from various angles, a process known as photogrammetry. Once the images are captured, they’re arranged (or “meshed”) together in 3D software, then exported and made compatible with VR hardware.
The end result: the user can sit inside the car and walk around the exterior of the vehicle. Once they sit behind the wheel, they’ll be able to adjust the seat and mirror before turning the key and taking the car for a simulated test-drive through the streets of any city or town in the world.
Gaming
In its earliest days, Virtual Reality was predominantly a home for gamers. Classic video games like “Doom” have been remade for VR, while new games like “The Price of Freedom” are expanding the concept of VR gaming.
The software programs most commonly used to build virtual experiences (game engines like Unity and Unreal) were first platforms for building video games. These game engines are now evolving to build all forms of virtual interactivity.
VR is different from most media forms that have come before it. Unlike newspapers, books, radio, and television, VR encourages the user to be part of the content, using their body and mind to influence the information surrounding them. This is a principle first mastered in the gaming industry.
We continue to source the expertise of video game programmers to realize VR’s potential.
Fundraising
Allow the immersion of Virtual Reality to draw a stronger connection between the donor and those who are in need of their assistance.
At the root of fundraising is the empathetic connection that encourages funds to change hands. Take the example of the Wounded Warriors Project. With a mission of offering a variety of educational, health, and employment programs to veterans, the foundation relies heavily on donations from a variety of sources. Often these donations are solicited at events that attempt to communicate the journey, mentality, and some of the post-combat struggles that afflict soldiers across the country.
For those who’ve never been to war, truly understanding the journey of a soldier is a nearly impossible task. Advanced technology does not bridge this gap between civilians and soldiers. However, it can increase the likelihood of striking an empathetic connection.
At a fundraising event geared toward raising money for Wounded Warriors, Virtual Reality can take the donor inside an immersive experience that reflects some of the perils and post-combat ills that plague soldiers. Virtual Reality can digitally put the donor in the place of a soldier as they say goodbye to their family, arrive overseas, and ride through foreign terrain toward combat. Inside the headset, the donor can also experience the elation of returning home from war and the devastation of rehabilitating some of the life-altering injuries sustained by these men and women.
Hiring
Even some of the most successful companies struggle to hire the right employees. The applicant creates a resume from their most attractive accomplishments and prepares their best presentation to display during the interview process.
Virtual Reality provides an opportunity to evaluate the psychology of the applicant more explicitly. By adding a virtual component to your company’s interview process, you’ll ask the applicant to enter an interactive virtual world in which they’ll move through a series of short prompts. Drawing on psychological research, we’ll help you understand how the applicant’s interaction with the virtual prompts reveals elements of their mentality, learning style, and competence related to the particular job opening.
Data collected from their spatial behavior, interaction with various objects, and reaction to varying colors will help companies better evaluate applicants and ultimately reduce costs in their HR department.
Industry Insider
This is an inside look at the extended reality industry – its market share, history, and potential. We also explore the way XR is being cultivated as a technology and as a storytelling platform by the most powerful brands in the world.
Apple in XR
In an interview last year, Vitalik Buterin (creator of Ethereum) was asked, “How would you describe Ethereum to the average person?”
His answer: “There are two kinds of average people, the average person who has heard of Bitcoin and the average person who hasn’t.”
When considering the evolution of Virtual Reality, the division feels more related to a moment in time. There was the Virtual Reality before Facebook bought Oculus in 2014, and there is the Virtual Reality that has existed since that $3 billion investment.
“Before the acquisition there were a few companies that believed in VR. And when I say a few, I mean a few,” Palmer Luckey (founder of Oculus) told Re/code in 2016. “After that acquisition happened, I think it was a signal to the rest of the industry that VR was here. This is gonna be a huge thing and if you didn’t invest in VR now you were gonna get your ass kicked down the line. That’s how you wake the giants.”
The Giant
Since 2014, Google, Microsoft, Sony, and HTC have all either upped their investment or joined the VR movement. Yet there’s one giant who has remained on the periphery, and with the Extended Reality industry still humming in the aftermath of the new Magic Leap headset and the 5th installment of the Oculus Connect conference in California, we’re asking…is there another giant ready to augment the XR world?
This month Apple released the next iPhone: the iPhone XR.
For the past few years (since the Facebook/Oculus marriage) XR has been the all-encompassing term for referencing Virtual, Augmented and Mixed Reality. The collective definition for “XR”: technologies that add digital enhancement to our visual perception through the use of head-mounted displays.
Before the September 12 iPhone XR announcement, an “XR” Google search yielded companies, websites, and publications centered on “Extended Reality.”
Since then, an “XR” search brings a stream of Apple content.
Unlike the other giants, Apple is not publicly invested in the development of Virtual Reality. The infamous “MacRumors” website has maintained for months that “Apple is rumored to have a secret team of hundreds of employees working on virtual and augmented reality projects.”
What we do know is that they are heavily involved in Augmented Reality (the use of goggles to digitally overlay information onto our visual perception of the physical world). Although the Cupertino giant hasn’t yet released their own AR hardware, they have become one of the leaders in consumer-level AR software with their easy-to-use ARKit.
Here’s a breakdown of Apple’s XR activity.
ARKit
ARKit has been available on multiple devices since iOS 11 was released last fall. The applications range from visualizing Ikea furniture in your own living room, to a guided map of an American Airlines terminal at flight time, to the anatomy of the human body. So Apple is creating XR software, yet, unlike the other giants, they haven’t released a piece of hardware.
AR Glasses
There are reports of a coming release. Check out this recent job listing where Apple is seeking to hire a 3D user interface engineer to “drive the next generation of interactive experiences for our platform. You will work with some of Apple’s most advanced technologies including the Augmented Reality (AR) and Virtual Reality (VR) support offered in ARKit and Metal 2.”
Reports
An earlier report suggested Apple could release AR glasses by 2020. Apple’s headset could feature an 8K display for each eye, offering a more realistic experience. Apple may be waiting for display and chip technologies to mature before releasing its headset. A previously uncovered Apple patent revealed that the company is investigating AR lens technology. Apple’s research calls for a compact lens array to help focus light and eliminate chromatic aberration effects.
Unlike some current AR and VR solutions on the market today, Apple’s implementation will reportedly not need trackers or cameras.
“We have been and continue to invest a lot in this,” Apple CEO Tim Cook said in a 2016 interview when asked about the technology. “We are high on AR for the long run, we think there’s great things for customers and a great commercial opportunity.”
Apple AR glasses may be part of this investment. According to a growing number of rumors leaked by three alleged Foxconn employees, the internal development is apparently known as Project Mirrorshades.
Apple XR Patent
Apple also has a registered patent for the use of multiple lenses to achieve the same effect as larger headsets, arranging them into what is known as a “catadioptric optical system.” More commonly seen in telescopes, this arrangement is a compact way to focus light, and it helps to eliminate the colors that can sometimes be seen on the edges of your vision in VR or AR — “chromatic aberration.”
The XR movement has been on the rise for decades. It got louder in 2014 as Facebook single-handedly accelerated the industry. It feels a bit peculiar that Apple has not been at the forefront of the XR world, unless they are operating on the other side of the curtain.
The Next Spielberg
From the outside, Virtual Reality must seem…so far in there. So far into this digital world that’s become part of us all. From the outside, VR must fall somewhere between time travel and an embodied internet. In other words, somewhere in the future. In the big cities, you’ve heard about VR or have a friend who does “something with VR.”
But the future has become a tough concept to pinpoint. The idealist hears autonomous vehicles on the road, Alexa speaking back to him, and conversations about human colonies on Mars, yet the pragmatist knows Uber got suspended from testing autonomous vehicles in Arizona after a fatal crash last month, that speaking to Alexa is sometimes like speaking to a 3-year-old, and that there are no plans to put a human being inside a Mars-bound spacecraft (let alone establish a society on a planet that takes many months to reach).
Virtual Reality is right in the middle of this search for the ground between human imagination and human capability. Google’s two most ubiquitous VR programs, Tilt Brush (in which you draw and paint inside a three-dimensional space) and Google Earth (in which you walk the streets of any city in the world), provide a taste of how vast this technology will become. In these early programs you begin to understand how we’ll one day educate children with this tech, test architectural structures before building them in the physical world, and reunite with deceased family members. The ideas are real, yet realizing them still feels off in one of those years that doesn’t yet read like a year — somewhere like 2045.
On the inside of this industry, we’ve speculated about what needs to happen in order to move this tech to the forefront. We’ve worked to help it shed the stigma of being the next gaming home for 17-year-old techies who stay up all night playing first-person shooters. On the inside we’ve seen firsthand how VR can be a catalyst for human interaction, education, and global exploration. It has become our work to make all this imagining attainable.
So what does need to happen? Well, some of the wheels are already in motion. Even before Facebook, Google, and Microsoft invested heavily in VR, there was a writer: Ernest Cline. He was 38 when Crown Publishing printed his first novel, “Ready Player One.” The next day Warner Bros. bought the rights to convert the book into a film, hiring Cline to co-write the screenplay. Nearly a decade later, the movie has arrived in theaters, directed by the most famous name in film — Spielberg.
The story begins in the year 2045 when — as a result of global warming and the depletion of fossil fuels — the world is mired in an energy crisis. The OASIS is where many people go to escape their decimated surroundings. It’s a virtual world accessible with a visor and enhanced with haptic technology. In the opening pages of the novel, the richest man in the world — James Halliday — has passed away, leaving behind a video message. He announces to the public that he’s hidden an Easter egg inside the OASIS, and the first person to discover it will inherit his wealth. Teenager Wade Watts is the main character.
It arrived in theaters on March 29 and grossed roughly $50 million on its opening weekend (only Black Panther and A Quiet Place grossed more on opening weekends this year). By the end of its third weekend Ready Player One earned a combined domestic/international gross of nearly half a billion dollars, making it one of the top 10 grossing Spielberg productions since we learned his name in the summer of 1975, with Jaws.
Ready Player One (the film), an action/thriller, is not meant to update the public on the current state of VR. The reality is that, first, the industry isn’t yet wealthy enough to attract the most highly trained coders and push the cutting edge of computer science. Secondly, just like the futuristic industries mentioned at the top, we don’t yet understand how this technology will influence the human mind. Remember, this is a technology predicated on immersing people in a digital world. What happens when human eyes perceive a new reality, when movement of your arms and legs pushes you deeper into a manmade environment? What influence does it have on the brain, the psyche, and our perception of reality?
It’s not the first time technology and storytelling have come together to influence the mind. While we’re on the topic, let’s take a look back at Spielberg, who — as a 28-year-old director in 1975 — created a new world and called it Amity Island, the setting of Jaws. He brought viewers to another place and with it ushered in the New Hollywood era. I remember waking up in the middle of the night screaming for my sister to get out of the water. The anticipation of the shark’s arrival, the terror among the people of Amity Island, the sound of the bloody ocean, and the feeling of raw danger circling me. It had an effect on the human brain. And we’re talking before computer graphics, when the shark was mechanical, the film schedule was dictated by the Cape Cod tides, and the music was there as a warning.
Considering that, it seems fitting that Spielberg has now helped move the needle forward on the next version of immersive storytelling. We’re a long way from this Virtual space being a refuge from a fossil-fuel-depleted world, but we rarely turn to Spielberg for practicality. We often turn to him for imagination. The question is so tangible that it’s become tantalizing…what is possible in Virtual Reality? If not a new world to rescue us from global warming, then…?
For hundreds of years we’ve asked the writers, the directors, the creators to show us their vision and help provide insight to these very questions. So consider for a moment that two of the most commercialized storytellers in action today — Ernest Cline and Steven Spielberg — are not storytelling in VR. They are storytelling about VR. Until that changes, Virtual Reality stays in 2045.
Back from the Olympics
A lot more people have asked about VR over the last couple of weeks, mentioning they “heard something about it” on the Olympic coverage but didn’t know exactly how it had been used.
Because fewer than 10 million people have purchased VR headsets, awareness of the technology and its programming is limited. So I’m going to use a page or two here to help you understand what the Olympics did with VR headsets and, more importantly, what it didn’t do.
Who was Involved
NBC Sports created its own Olympic VR app and worked with Intel and Olympic Broadcasting Services (which produces video of the Olympic games) to stream live event coverage to a range of VR headsets. It’s the second time NBC has included 360 video in its Olympic coverage but the first time they streamed live events.
What hardware was Included: Samsung Gear VR, Google Daydream, and Windows Mixed Reality headsets.
Coverage
NBC offered 50 hours of 360 video coverage during the two-week event. And because stats mean nothing without other stats to compare them against, consider that NBC broadcast 2,400 hours of 2D (television/computer) screen coverage of the games (the most ever). Among the events streamed to VR headsets were curling, snowboarding, bobsledding, and ski jumping.
Did you have to pay
Yes and no. Downloading the NBC Olympics VR app was free. Then you needed to enter your cable provider and password.
What did it look like
Compared to high definition on a 2D screen, the 360 video in VR was grainy. Some events were offered in 180-degree video (basically meaning you watched a 2D screen inside the virtual space).
What content was the best
The Opening Ceremony in VR was pretty cool (it’s such a dynamic event with a portion of the show emerging from the stands and a variety of lights and colors). Getting the chance to view the ceremony in VR did provide the feeling of being there. You got to look all around the stadium, hear the moving crowd, see the energized environment, and focus on whichever portions of the show interested you most.
What needs to improve
The quality of 360 video. Compared to the HD quality of your 2D TV or computer screen, 360 video falls well short. Additionally, it’s crucial to realize that the Olympic content was marketed as “The Olympics in VR” but it was actually “The Olympics in 360 Video” — inside a VR headset.
The Difference Between VR and 360
What makes VR such an incredible technology is the interactivity it provides. For the first time, humans are able to interact with (actually reach out to touch, move, and have an influence on) the digitally immersive world surrounding them. This foundational element is what’s garnered massive investments from Facebook, Microsoft, Google, Samsung, and others. “The Olympics in VR” included none of this: no interactivity among users inside the digital world and no interactivity with the digital world itself.
Bottom Line
The reason “The Olympics in VR” was actually “The Olympics in 360” is twofold.
First, the type of headset capable of supporting 360 video is more affordable than a full VR rig (and thus far more prevalent in the US). Second, creating truly interactive, high-quality VR content to cover an event like the Olympics would be far too costly and require far too many resources to generate an ROI (again, considering how few people own the hardware and thus how few people would have the capability to interact with the content).
Wrapping it up:
Credit to NBC (and the others who were part of it) for getting out there and experimenting with the new technology. But if you missed out on the 360 coverage this year…you’re probably just as well off checking back in for the coverage from Tokyo in the summer of 2020.
The Father of VR
I remembered an old guy on the back patio of the coffee shop in Red Hook mentioning Morton Heilig as “The Father of VR.”
So, I spent last night reading about him and what he brought to an industry that is, more than half a century later, still budding.
Born in 1926, Heilig established himself as a cinematographer, using that background to eventually develop and patent two pieces of technology: “the telesphere mask” and the “Sensorama.” He and his partner began the development of these machines in 1957, patenting them in 1962.
A bulky piece of technology shaped like an old-school arcade game, the Sensorama allowed the user to sit on a chair and lean their head into the equipment — much like you would with the vision-testing machine at the optometrist. One of the first experiences available in the Sensorama was a motorcycle ride through Brooklyn. Heilig attempted to incorporate all the senses and draw the viewer into a cinematic experience — a description very similar to what we understand VR to be more than 60 years later. He referred to it as “Experience Theatre.”
He published a paper in 1955 called “The Cinema of the Future,” in which he detailed a multi-sensory theatre experience. The first text that appears inside the document reads, “Thus, individually and collectively, by thoroughly applying the methodology of art, the cinema of the future will become the first art form to reveal the new scientific world to man in the full sensual vividness and dynamic vitality of his consciousness.”
After this writing, he went on to create the Sensorama and five short film displays. The machine still functions to this day.
Eventually, Heilig said, he wasn’t able to capture images of high enough quality with 35 mm film cameras to create an immersive experience marketable to the general public.
Off the Ground
This is the second in a Nova XR Media multi-part series.
As we study the way a new technology progresses through the Adoption Lifecycle, we realize the innovators have already captured Virtual Reality.
Geoffrey Moore describes the innovators as a group of technology enthusiasts who appreciate the tech for its own sake. They don’t need to believe it will break through in the market, nor that it holds the potential for greater achievement. The innovators savor the technology for everything it is — whether it’s the smooth texture of the software or its painfully slow operating speed.
With Facebook having committed a multi-billion-dollar investment and MIT using Facebook’s new VR headset to give humans first-person control of robots, the innovators have their hands on Virtual Reality.
[Chart: Technology Adoption Lifecycle showing Innovators, Early Adopters, Early Majority, Late Majority, Laggards]
The early adopters come in behind the innovators and bring their vision. Moore calls them, “That rare breed of people who have the insight to match up an emerging technology to a strategic opportunity, the temper to translate that insight into a high-visibility, high-risk project, and the charisma to get the rest of their organization to buy into that project…the core of their dream is a business goal, not a technology goal.”
Well, as we approach 2018, VR has extended its reach all the way to Lowe’s, which has incorporated a Holoroom to offer customers immersive experiences such as shifting the paint color on the walls of their new room. This means homeowners who may work in any range of industries have experienced VR inside a building operated by one of the strongest brands in the country.
While reaching this level of audience is an advanced step for the technology, it’s an indication of Lowe’s reaching across the chasm and becoming an early adopter.
Still, the evidence suggests VR has not yet reached the early majority: a CNBC report estimates that Facebook sold fewer than 400,000 Oculus Rift units in 2016. Remember, the early majority represents one third of the market, so if the largest tech giant didn’t break half a million sales in the headset’s first year, we understand that VR has yet to cross over.
Crossing the Chasm
This is an introduction to Nova Media’s multi-part examination of Virtual Reality’s journey toward critical mass in the consumer market.
VR has a long and fragmented history dating all the way back to Morton Heilig in the 1950s. Then there was a simulation of Aspen, Colorado that came out of MIT in the late ’70s. A decade and a half later, Sega announced its first VR headset for an arcade game.
All of these…just a few of the breakthroughs that have led to predictions about how and when VR will make its significant impact on the consumer market.
But as the years passed we heard just a faint noise from this new virtual world playing in the background, often drowned out by HD TVs, smartphones, and social media.
The noise got louder, though, in 2014 when Facebook acquired Oculus and its new VR headset for over $2 billion. And then — at Oculus’ annual conference last month — a louder noise from Mark Zuckerberg, who said, “I am more committed than ever to the future of VR.”
So now, three years into Facebook’s involvement with this emerging technology, we’re still asking the question: when will VR reach critical mass?
In 1991 Geoffrey Moore wrote the first version of “Crossing the Chasm,” studying why, how, and at what rate new ideas and technologies spread through the market.
His book, which emerged in its third edition three years ago, studies the tendencies with which young technologies progress through the adoption life cycle. The writing focuses on how these products often wind up stuck in the divide between the early adopters and the early majority. This can be a deadly place for a new technology to try to survive, as the early majority makes up one third of the market.
[Chart: Technology Adoption Lifecycle showing Innovators, Early Adopters, Early Majority, Late Majority, Laggards]
This series will examine where VR stands in relation to this curve, what’s prevented it from breaking through, and what will need to happen in order for it to cross the next divide.
World Peace
Virtual Reality will democratize experience. Consider how the internet democratized information. The internet has globalized us. We communicate, share, and do business with people around the world, most of whom we’ve never met in person. We may never have seen their faces or heard their voices, but we know them.
XR empowers remote access to in-person experiences through shared virtual environments.
If we really knew what it was like to walk in another’s shoes, we’d be humbled. If we visited the places our governments have declared enemies, we’d ask more questions. If we knew firsthand the horrors of war, the inhumanity of greed, and the glory of love, we’d live differently.
Virtual Reality will democratize experience and facilitate peace.
Nova Team
XR Production
N0va 3D
Virtual Reality Development
Via Barberia 13, Bologna, Italy 40123
Van Brunt Street, Brooklyn, New York 11231