What New York City will look like in 2020

By Gaspard Giroud - 10 May, 2017 - 3ds Max

Our leap from visualization to virtual reality

Guest point of view from Gaspard Giroud, co-founder of New York-based marketing and experience-design agency, Piranha.

“Your architectural visualization is amazing. Now can we see it in a real-world context?” That’s probably the top request I’ve heard from clients over the years.

We do a lot of aerial filming for architecture at Piranha, so we’re always up to the task of capturing original footage and integrating a CGI version of our client's building into the landscape. But the rapid adoption of virtual reality also spawned the idea for an entirely new business venture. This is the story of Garou, and our journey from creating 3D visualizations to developing immersive architectural VR experiences.

But first, a bit of stage setting. Garou is our virtual reality platform. It started out as a passion project that evolved into a platform that embodies our vision of the future of VR. The name Garou is a tribute to Marcel Aymé's French short story, “Le Passe-Muraille.” It’s about an average guy with an average administration job, who one day realizes that he has the gift of walking through walls. He starts testing his limits by robbing banks and jewelry stores, leaving his calling card behind in chalk: “Garou Garou.” It’s a fitting moniker, since our project's tagline is “There are no walls” (with a secondary tip of the hat to The Matrix).

Image courtesy of Piranha.

Wouldn’t it be amazing if…

The New York City skyline is on the verge of dramatic, irreversible change with the construction of many new skyscrapers on the south end of Central Park. But how would this shape the city in the near future? This was our guiding concept.

Which led to our idea of making a film about what New York will look like come 2020. We started by first shooting aerial scenes to establish the current cityscape, making way for the addition of future skyscrapers and buildings. At the same time, we began toying with the idea of creating a VR experience. We asked ourselves: “Wouldn't it be even better and way more interesting if you were able to actually walk around that new skyline?”

Image courtesy of Piranha.

Image courtesy of Piranha.

The practical aspects proved to be a lot more challenging, mostly because we had to figure out how to actually get the work done! We’re all creatives with a solid grasp on technology, but none of us had game development experience, so we didn't know how to approach a project like this. As you can imagine, we were faced with a learning curve and were in for a few surprises along the way.

The honest reality of virtual reality

The first part of the process was something we're very familiar with: creating high-end models for visualization. We quickly discovered that there’s no quick way to move from a 3D model to a VR experience. This was compounded by the fact that VR environments are rendered on the fly, so it's crucial that you have an environment that is as optimized as possible.

The polygon problem

So there we were, faced with a model of New York City, filled with hundreds of towers with corresponding millions upon millions of polygons. In short: a big optimization challenge. When you're creating for real-time architectural VR, you need to switch gears to think like a game developer and make the polygon count as light as possible, while lightening the texture load. If you don’t factor that in while modeling from the get-go, you'll have to go back and clean it up later. On a small model, that may not be a big deal. But when your environment is New York City, it’s huge!
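To make the scale of the problem concrete, here's a back-of-envelope sketch of a per-building polygon budget. All the numbers below are illustrative assumptions, not our actual project figures:

```python
# Back-of-envelope polygon budget for real-time architectural VR.
# Every number here is an illustrative assumption, not a real project figure.

TRIANGLE_BUDGET_PER_FRAME = 1_000_000   # what a target GPU might push comfortably
NUM_TOWERS = 300                        # rough count of modeled skyscrapers
OVERHEAD_FRACTION = 0.5                 # share reserved for streets, terrain, props, sky

# What's left for the towers after general scene overhead.
budget_for_towers = TRIANGLE_BUDGET_PER_FRAME * (1 - OVERHEAD_FRACTION)
per_tower = budget_for_towers / NUM_TOWERS

print(f"Per-tower triangle budget: ~{per_tower:,.0f} triangles")
```

A single unoptimized architectural model can easily blow past a figure like that on its own, which is why the decimation work has to start at modeling time rather than as a cleanup pass.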

Image courtesy of Piranha.

Nobody's a VR expert

What's interesting is that since everybody is still trying to figure out VR, it's difficult to find an expert, let alone stay on top of all the latest developments in tech.

Once we prepared our files in 3ds Max for VR, we imported them into the game engine. We’re working with two game engines: Stingray and Unreal. Now, if you've already taken the leap into VR, one thing that many architectural visualizers may be familiar with is the hassle of having to retexture everything when bringing it into an engine. We've been able to bypass this issue using the Stingray engine, which has a node-based system that lets you wire up the interactivity you want without writing code, making texturing far less time-consuming.
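The retexturing hassle usually comes down to material names and parameters not surviving the export. One generic workaround is to keep a remap table from DCC material names to engine-side shader setups, so reassignment can be scripted instead of done by hand. A minimal sketch in plain Python; all the material and shader names are hypothetical:

```python
# Hypothetical remap table: 3ds Max material name -> engine shader setup.
# Purely illustrative; a real pipeline would read this from export metadata.

MATERIAL_REMAP = {
    "Glass_Curtainwall": {"shader": "standard_pbr", "roughness": 0.05, "metallic": 0.0},
    "Concrete_Facade":   {"shader": "standard_pbr", "roughness": 0.85, "metallic": 0.0},
    "Steel_Mullion":     {"shader": "standard_pbr", "roughness": 0.40, "metallic": 1.0},
}

def resolve_material(max_name: str) -> dict:
    """Return the engine-side material setup for a 3ds Max material name."""
    try:
        return MATERIAL_REMAP[max_name]
    except KeyError:
        # Fall back to a neutral material rather than failing the import.
        return {"shader": "standard_pbr", "roughness": 0.5, "metallic": 0.0}

print(resolve_material("Glass_Curtainwall")["shader"])
```

The point of the table is that it lives in one place: when a model is re-exported, every object gets its engine material back automatically instead of being retextured by hand.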

Image courtesy of Piranha.

Once we had the "future" visuals figured out, we had to combine them with NYC as it stands today. We used six GoPros rigged on a pole below the helicopter to shoot the aerial footage. This allowed us to shoot a spherical image, which we then brought back into Stingray for use as a back plate.
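A quick sanity check on that kind of rig is to confirm the cameras' combined field of view covers the full circle with enough overlap for stitching. A rough sketch, assuming six evenly spaced cameras and a ~122° horizontal field of view per camera (a GoPro-class wide setting; the exact figure varies by model and mode):

```python
# Coverage check for a six-camera spherical rig (illustrative numbers).
NUM_CAMERAS = 6
HFOV_DEG = 122.0                         # assumed per-camera horizontal field of view

total_coverage = NUM_CAMERAS * HFOV_DEG  # combined horizontal coverage in degrees
overlap_total = total_coverage - 360.0   # everything beyond a full circle is overlap
overlap_per_seam = overlap_total / NUM_CAMERAS

print(f"Overlap per seam: {overlap_per_seam:.0f} degrees")
```

Generous overlap at each seam is what gives the stitching software enough shared pixels to blend the six streams into one clean spherical plate.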

Image courtesy of Piranha.

Image courtesy of Piranha.

The Garou Experience

When you're experiencing Garou, you view the city from the perspective of a giant walking through the streets. You can visit certain hotspots, such as Times Square, where you trigger videos that were shot in that very location.

Image courtesy of Piranha.

When you get close enough to a building, you’ll see its label pop up, and you’ll be able to click yourself into that building. We're currently producing five experiences consisting of Central Park, a museum, a helicopter tour, the Oculus at the World Trade Center, and we’re even putting people on top of the Brooklyn Bridge. After that, we’re going to invite partners to help fill this future New York by adding their projects in Garou, allowing people to explore their locations in VR.
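The logic behind that kind of proximity interaction is simple: each hotspot has a position and a trigger radius, and the player's position is tested against it every frame. A minimal sketch; the coordinates and radii are made up for illustration:

```python
import math

# Minimal proximity-hotspot sketch; positions are in scene units, purely illustrative.
HOTSPOTS = {
    "Times Square":    {"pos": (120.0, 0.0, -45.0), "radius": 30.0},
    "Brooklyn Bridge": {"pos": (-310.0, 0.0, 880.0), "radius": 50.0},
}

def active_hotspots(player_pos):
    """Return the names of hotspots whose trigger radius contains the player."""
    hits = []
    for name, spot in HOTSPOTS.items():
        if math.dist(player_pos, spot["pos"]) <= spot["radius"]:
            hits.append(name)
    return hits

print(active_hotspots((110.0, 0.0, -40.0)))  # standing near Times Square
```

In an engine, the same check would hang off a trigger volume, and entering it would pop the label or start the location's video.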

The challenges facing VR adoption

One of the obstacles we foresee in getting our platform out there - and with VR in general - is the lack of accessibility. We're aiming to be platform agnostic so that we can reach as many people as possible, since everything relies on adoption.

I see three challenges that need to be addressed in order for VR to thrive. The first is that VR devices need to be easier to use, cheaper, and more powerful. Head-mounted displays need to be smaller and lighter, since you can't be uncomfortable and tethered to a computer in the long run.

Great content is also a necessity. If there's nothing interesting out there, why would anyone hop on the VR bandwagon? High-quality visuals and strong content, coupled with growing technologies such as AI, eye tracking, and facial recognition, will lead to the future of VR and beyond.

The final dilemma is visual quality. While the visuals for VR are currently quite good, they need to improve for the sake of immersion. One of the hurdles with real-time VR is that your scene needs to be optimized - lots of polygons and real-time lighting make for a poor, lagging experience. In our case, the lighting was baked. We used high-end renderings and baked the lighting, which increased the efficiency of the model and guaranteed the quality of the lighting, so we didn't run into issues with frames dropping.
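The frame-rate constraint behind that decision is easy to quantify: at the 90 Hz most headsets target, each frame has roughly 11 milliseconds to render, and every dynamic light eats into that budget, whereas baked lighting is paid for once, up front. A rough sketch of the arithmetic; the per-light and base-scene costs are illustrative assumptions:

```python
# Frame-time budget at VR refresh rates (illustrative cost figures).
REFRESH_HZ = 90
frame_budget_ms = 1000.0 / REFRESH_HZ    # ~11.1 ms available per frame

base_scene_ms = 8.0                      # assumed cost of geometry, shading, etc.
dynamic_light_ms = 1.5                   # assumed added cost per real-time light

# How many dynamic lights fit before frames start dropping?
headroom_ms = frame_budget_ms - base_scene_ms
max_dynamic_lights = int(headroom_ms // dynamic_light_ms)

print(f"Frame budget: {frame_budget_ms:.1f} ms, room for {max_dynamic_lights} dynamic lights")
```

Baking moves the lighting cost out of that 11 ms window entirely, which is why it buys both stable frame rates and render-quality light.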

There are no walls

As with any new technology, the majority of the process is learned along the way. We had our concept first and thought of how we'd do it later, encountering everything from technical challenges to technological limitations. Our platform is still young, and we're hoping to one day have user-generated 360 videos uploaded and tied to hotspots across the city, so that people can experience the hotspots through others' eyes. Once we reach that point, anyone who tries our platform will be able to experience New York City from various perspectives and ultimately walk through walls, just like Garou.

Want to hear more from innovators taking the leap into the future of design visualization?

Find more stories like this one on Design in Motion.

Published In
  • 3ds Max
  • Film & VFX