Although pre-orders for Oculus Rift and the HTC Vive virtual reality headsets have already sold out for the spring, the success of these devices rests on the quality of content that is made available. To this end, both Epic’s Unreal Engine and Unity have revealed tools for developing for virtual reality. But now there’s a new player looking to take game development to the next level: MaxPlay.

MaxPlay is a “future-focused,” cloud-enabled development suite that allows development teams to collaborate more efficiently to create games for every platform. That includes consoles, mobile devices, and emerging platforms like virtual and augmented reality. The company has announced a set of premium technology partners, including Nvidia, FMOD Studio, and EMotion FX, and will be demonstrating its tools at GDC next week.

[a]listdaily talks to Sinjin Bain, CEO at MaxPlay, to discuss creating content for virtual reality.

What is MaxPlay?

MaxPlay was founded by a group of game industry and enterprise cloud software veterans, with the purpose of providing a next-generation game and interactive development environment for all platforms. We have a great deal of experience in making and deploying content, and we don’t have any legacy architecture, so we can take a fresh look at the issues developers have today.

We asked, “How do we enable developers to ‘get to fun’ faster, and make more money?” and then focused on making that possible. Our architecture enables real-time collaborative development and provides developers with insight and analytics, so they can see telemetry in the development environment in meaningful ways that impact what they’re doing.

We also have a very compelling, forward-looking runtime architecture for rendering and running games, which has big implications for virtual and mixed reality. We’re building an environment that’s going to be a paradigm shift, in a very positive way, for developers.

MaxPlay is described as a “future-focused” game development suite. What does that mean?

It means our architecture is built for the future. We’ve constructed an open architecture that lets developers write plug-ins very easily. Developers can go in to modify and extend the platform as new opportunities present themselves. For example, nobody really knows yet what super-compelling virtual reality experiences will look like.

Developers will need to be able to modify their platform to be more creative in a new medium. The same goes for telemetry. We don’t yet know how something like eye-tracking telemetry will be used, but we know that it’s significant, so we need to get it to developers so that they can use it in meaningful ways. These and other capabilities are what we mean by a future-enabled implementation.
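
For a rough sense of what that kind of open, plug-in-friendly architecture can look like, here is a minimal C++ sketch of a plug-in interface that could feed eye-tracking telemetry back into an editor. The interface, the factory function, and the plug-in itself are hypothetical illustrations; MaxPlay has not published its actual API.

// Hypothetical plug-in interface (not MaxPlay's published API).
// A host editor could load implementations from shared libraries at runtime.
#include <cstdio>
#include <string>

class IEnginePlugin {
public:
    virtual ~IEnginePlugin() = default;
    virtual std::string Name() const = 0;        // human-readable plug-in name
    virtual void OnLoad() = 0;                   // called once when the plug-in is loaded
    virtual void OnUpdate(double dtSeconds) = 0; // called every editor/runtime tick
};

// Example plug-in: forwards (imaginary) eye-tracking samples as telemetry.
class EyeTrackingTelemetryPlugin final : public IEnginePlugin {
public:
    std::string Name() const override { return "EyeTrackingTelemetry"; }
    void OnLoad() override { std::puts("eye-tracking telemetry plug-in loaded"); }
    void OnUpdate(double dtSeconds) override {
        (void)dtSeconds; // a real plug-in would poll the tracker and emit events here
    }
};

// Exported with C linkage so a host can discover it via dlsym()/GetProcAddress().
extern "C" IEnginePlugin* CreatePlugin() { return new EyeTrackingTelemetryPlugin(); }

In practice the host would enumerate shared libraries in a plug-ins folder, call CreatePlugin() from each, and tick them alongside the editor.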

We can also take advantage of the four-, six-, eight- and twelve-core hardware that’s here and coming, which enables the multi-view and multi-depth rendering that VR and AR require. Although today’s mobile devices, PCs and consoles are functionally multi-threaded, the existing technology doesn’t take full advantage of today’s and tomorrow’s hardware.
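
As an illustration of the kind of multi-view work those extra cores can absorb, here is a simplified C++ sketch that records command lists for several views (for example, left eye and right eye) on separate worker threads. The types and the BuildCommandsForView function are stand-ins, not engine code.

// Illustrative sketch: fanning per-view rendering work out across cores.
#include <functional>
#include <future>
#include <vector>

struct ViewParams  { /* camera matrix, viewport, per-eye offset ... */ };
struct CommandList { /* recorded draw calls for one view */ };

// Stand-in for per-view culling and command recording.
CommandList BuildCommandsForView(const ViewParams& view) {
    (void)view;           // a real engine would cull the scene and record draws here
    return CommandList{};
}

std::vector<CommandList> BuildFrame(const std::vector<ViewParams>& views) {
    std::vector<std::future<CommandList>> jobs;
    jobs.reserve(views.size());
    for (const ViewParams& v : views) {
        // Each view gets its own worker, so more cores mean more views per frame.
        jobs.push_back(std::async(std::launch::async, BuildCommandsForView, std::cref(v)));
    }
    std::vector<CommandList> lists;
    lists.reserve(jobs.size());
    for (auto& job : jobs) {
        lists.push_back(job.get());
    }
    return lists;         // command lists are then submitted to the GPU in view order
}

The same pattern extends to shadow passes or additional depth layers when the hardware has cores to spare.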

How does partnering with companies like Nvidia and FMOD help future-focused development?

As an example, FMOD is developing audio software, and the ability to plug in, distribute and modify that software is seamless with MaxPlay. It enables developers to access and use audio in new and creative ways that don’t exist in standard platforms. Intel is coming out with chipsets that have more cores, and our architecture lets very sophisticated developers reserve a core for specific purposes, like processing audio. Content creators like Fox and Technicolor believe that audio is an important aspect of virtual reality media.
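
Reserving a core is ultimately a thread-affinity decision. The following is a minimal, Linux-specific C++ sketch of pinning a dedicated audio-mixing thread to one core so render and gameplay work on the other cores cannot starve it. The core index and mixing loop are placeholders, and this is not code from MaxPlay or FMOD.

// Illustrative sketch (Linux/pthreads): dedicating one core to audio mixing.
#include <pthread.h>   // pthread_setaffinity_np
#include <sched.h>     // cpu_set_t, CPU_ZERO, CPU_SET
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> g_running{true};

void AudioMixLoop() {
    while (g_running.load(std::memory_order_relaxed)) {
        // Mix and submit the next audio buffer, e.g. by ticking the audio middleware.
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
}

int main() {
    std::thread audio(AudioMixLoop);

    // Pin the audio thread to core 3 (placeholder index) so it is never preempted
    // by render or gameplay jobs scheduled on the remaining cores.
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);
    pthread_setaffinity_np(audio.native_handle(), sizeof(set), &set);

    // ... run the game loop on the other cores ...
    g_running = false;
    audio.join();
    return 0;
}

A task-based engine could achieve the same thing through a scheduler policy rather than a raw pthread call; the point is simply that one core stays predictable for audio.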

In short, we can combine our partnerships with our architecture to do special things, like handling more audio events, and take fuller advantage of the technology FMOD provides without degrading performance. We can’t predict exactly what developers will need; we just know that they’ll want to do things in VR and AR that they can’t do today. We’re future-aware enough to provide them with flexible and creative tools that help them this year, the year after, and the one after that.

What challenges do developers face when working with emerging technologies like VR?

I think it will be the transition from single-screen games to VR and AR. You’re fundamentally changing the way users experience and consume content. That might mean developers face increased pressure to explore that new experience in ways they haven’t thought of before.

You need to be able to iterate and fail faster. We’ve developed an environment in which developers can do things quicker, work tighter as a team, and experience what they’ve done faster, so they can see what doesn’t work and move on to something different. I think the difficulty in moving into new creative spaces is that it puts more pressure on iteration time, so you need tools that help you do that better. That’s fundamentally what we at MaxPlay are providing.

How long do you think it will be before we see mass adoption of VR?

I think mass adoption is a couple of years out, but then you look around and see how Samsung is bundling the Gear VR with their phones. There are some very big players invested in accelerating hardware adoption and accessibility. That’s a really good sign for consumers. Then it’s a matter of how fast great content gets developed.

All the ingredients are coming online right now to enable very fast adoption of VR. You have the fact that VR can be done on current generation phones in a meaningful way. Then you have companies like Facebook, Samsung, Google, and Apple investing. So, I think we’re going to see numbers in 2018. We’re learning how to make great content in 2016 and 2017, and developers are excited about this platform.

There’s still a practical reality of getting a broad consumer base, but I think people are going to be surprised in 2018.

MaxPlay is relatively new to the scene. How would you say it compares to existing tools like Unreal Engine or Unity?

The way we would compare ourselves is that because we don’t have legacy architecture, we’ve been able to bring a lot of expertise and fresh thinking to the space. So, I think our strength is in our open architecture, and how it provides more ways for teams to work together so that they can be more productive and creative — with superior performance on devices.

It’s a big industry that’s growing rapidly. There are developers that will want to stick with Unreal or Unity, and then there are people who will see what we’re doing and try our tools.

What do you think developers should keep in mind when starting out, especially when there are so many diverse platforms?

I think a developer needs to keep in mind that, when they pick tools and technologies, they need to look at and understand how flexible the platform they’re investing in is. What does it let them do out of the gate, and what does it let them do as they are developing their product? How do their tools and technologies enable them to do their jobs, and where do they get in the way?

Another thing I would focus on is that it’s a big world out there, with different types of communities. How do their toolsets and platforms enable them to work with other people in productive ways? Those people might not necessarily be in the next office; they could be across town or in a different state or country. Teams are distributed now, and that’s sort of the new reality. You need to make sure, as a developer, that you’re plugged into that world.