One of the things I’ve always been obsessed with is how technology changes the way people make movies. Sometimes the change is monumental, like the introduction of sound or color. Other times it’s something a casual moviegoer might not see, like a new type of digital camera or a better Steadicam. But something I’m confident every person has noticed is the way Pixar movies have changed over the years. While the company has steadily made film after film with amazing and unique stories, they’ve also constantly pushed the boundaries of incorporating technology. Need a reminder? Check out an image from the original Toy Story and then look at an image from Finding Dory. It’s like they were made by two different companies. And in a sense, that’s true.
If you’re not aware, RenderMan has been Pixar’s core rendering technology for over 25 years. It’s the software that’s helped bring Buzz Lightyear and every other Pixar creation to life, along with countless other characters and creatures in such films as Terminator 2: Judgment Day, Jurassic Park, Avatar, Titanic, the Star Wars prequels, and The Lord of the Rings. In fact, RenderMan was such an achievement, the Academy of Motion Picture Arts and Sciences’ Board of Governors honored Ed Catmull, Loren Carpenter and Rob Cook with an Academy Award of Merit “for significant advancements to the field of motion picture rendering as exemplified in Pixar’s RenderMan.” It was the first Oscar awarded for a software product.
While RenderMan is an amazing product, like every piece of software, it needs an update every so often. So over the past few years, Pixar has been hard at work developing a new version, and when you see Andrew Stanton’s Finding Dory (arriving in theaters June 17th), you’ll get to see the first film to use that upgraded software.
Recently, Disney invited us – along with a number of other reporters – to visit the Monterey Bay Aquarium in Northern California to get an early look at the film and talk with the people responsible for bringing Pixar’s latest creation to life. Steve May, Pixar’s Chief Technology Officer, explained some of the benefits of the new version during a group presentation. As an example: previously, under RenderMan, the company could do direct light but not indirect light. With the new version, indirect light happens automatically, freeing lighters to spend their time shaping the light. Another improvement is how realistic water now looks. May went on to explain that the shots in Finding Dory could never have been done on Finding Nemo.
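To give a rough sense of the distinction May is describing, here is a minimal sketch, not Pixar’s code, with all names and values hypothetical: a renderer’s direct term is light arriving straight from a source, while the indirect term gathers light bounced off other surfaces, typically estimated by Monte Carlo sampling.

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def direct_light(normal, light_dir, light_intensity):
    # Direct term: energy arriving straight from the light source,
    # scaled by Lambert's cosine law.
    return light_intensity * max(0.0, dot(normal, light_dir))

def indirect_light(normal, light_dir, light_intensity,
                   bounce_albedo=0.5, samples=1000, rng=random.Random(0)):
    # One-bounce indirect term, estimated by Monte Carlo: average the
    # light reflected toward this point from randomly sampled directions.
    # A real renderer would trace each sample ray into the scene; here we
    # pretend every sample hits a diffuse surface with the given albedo.
    total = 0.0
    for _ in range(samples):
        # Sample a random direction in the hemisphere around the normal.
        d = normalize((rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)))
        if dot(d, normal) < 0:
            d = tuple(-x for x in d)
        # Light the hypothetical bounce surface directly, then reflect a
        # fraction (the albedo) of that energy back toward our shading point.
        bounced = bounce_albedo * direct_light(d, light_dir, light_intensity)
        total += bounced * dot(d, normal)
    return total / samples

n = (0.0, 0.0, 1.0)             # surface normal
l = normalize((0.0, 1.0, 1.0))  # direction toward the light
print(direct_light(n, l, 1.0))   # direct contribution only
print(indirect_light(n, l, 1.0)) # extra energy arriving via one bounce
```

The point of the sketch is that the indirect term adds energy the direct term misses, which is why scenes lit only directly look flat; in the old workflow, lighters had to fake that bounce light by hand.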
Later in the day, I sat down with Steve May for an exclusive interview. We talked about the new version of RenderMan, his thoughts on VR and how Pixar might incorporate the technology, as well as upcoming technical hurdles they’re trying to overcome, and a lot more. If you’ve ever been curious about how Pixar uses technology to help shape their films, this interview is for you.
Collider: You guys launched a new version of RenderMan. Talk a little bit about what was the most daunting part of this new technology.
Steve May: It really was the sum of the parts, right? Changing the renderer, changing the tool the lighters use to interface with the renderer, and changing the underlying file format we use, USD. Actually, the combination of changing the whole pipeline at once was the most daunting. Any one of those things can be a big deal on a film and cause enough congestion and difficulty, but it felt like the right thing to do. They all felt interconnected, and we knew where we wanted to go; we knew we wanted to use each one of those technologies, and the biggest leap of faith was just going for it and doing all these things at once. The reason it was traumatic was that it changed a lot of the shot production pipeline. The movies are hard enough to make as they are, and if you change the tools on roughly half of the artists on a show, no matter how good they are, it’s going to be disruptive and difficult. That was really the biggest thing, the sum of all those things.
How long had you been working toward releasing this new RenderMan? You had used the old software for 25 years.
May: The development of it? About four years.
Was there a tool in that toolbox that you couldn’t include? Maybe because you had the release deadline, or maybe because the technology isn’t ready yet for RenderMan 3.0?
May: Not totally sure I understand your question.
I mean with the new software. I would imagine you weren’t able to put everything in the new software that you were looking to do. Or are there some things that will be in the next software update?
May: No, we were not able to do everything. It is enough to make a feature film, but there is tons of work that still needs to happen. I can give you a tangible example of that. One of the things that we want to do at a high level is be able to artistically control these physically based simulations. So now you have got these super rich, great simulations of light giving you very realistic results, but as an animation studio we don’t have to make things look real; in fact, we don’t want to most of the time. Everyone leverages the fact that we can caricature things to actually help tell the story. We made Up, and Carl was boxed in by his world, so we designed him to be a box. His head is square, his fingernails and age spots are square, and we do all those kinds of things just to tell the story.
So with the software, the thing we are working on now is if an artist says, “That’s great, I am getting all this magic stuff for free, but I want to bend it. I don’t want it to look real. The simulation is saying this is a pale blue; I want it to be a warmer color instead.” It’s not physical, it’s just artistically what we want to do, so that is a big area we are working on. Another one is just performance. It is never going to be fast enough. So another thing we are working on for the next release is faster volumes, so you can render large cloudscapes and large effects simulations and do it much more efficiently. And we are always looking for ease of use. RenderMan is a very powerful tool, but the power came with a soundboard of knobs and switches, and so the challenge we are embracing is: how do we make this simpler to use so that artists don’t have to be so technical in order to get a lot out of it? There’s lots of things to do.
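The “pale blue to warmer color” override May describes can be pictured as a simple blend between the physically simulated result and an art-directed target. This is a toy illustration of the idea, not RenderMan’s actual control; the function name and color values are hypothetical.

```python
def art_direct(simulated_rgb, target_rgb, amount):
    """Blend the physically simulated color toward an art-directed target.

    amount = 0.0 keeps the pure simulation; 1.0 fully overrides it.
    """
    return tuple(s + amount * (t - s) for s, t in zip(simulated_rgb, target_rgb))

pale_blue = (0.55, 0.65, 0.80)  # what the light simulation produced
warm = (0.85, 0.60, 0.40)       # what the artist wants instead
halfway = art_direct(pale_blue, warm, 0.5)
print(halfway)  # a color between the two: red channel raised, blue lowered
```

The design point is that the physical simulation stays intact underneath; the artistic override is a controlled, dialable departure from it rather than a hand-painted replacement.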
I am very excited about VR. I’ve used Oculus Rift and a few other technologies and really think it’s a revolution in terms of what people are going to be able to look at and do. How is Pixar looking at VR technology and maybe thinking about ways to incorporate it?
May: I agree. I think that VR is going to be like smartphones and the internet; it’s probably going to be life changing. For us there are two ways we are looking at it right now. We are dabbling, we are doing experiments; we are not doing big projects or anything with VR. One, is it a new medium for telling stories? I think the answer is probably yes, but we kind of don’t know what yet. It is definitely unclear how you do narrative storytelling with VR, but there is definitely potential there. The other way we look at it is just as a tool for making the traditional two-dimensional films we make now. How can we use VR for exploration of sets and location scouts? There are also 3D drawing tools, where you’re painting in 3D with a palette, and it’s really fun to be in 3D space painting out shapes. You can start building, sketching out sets with it, or just trying ideas, and maybe there are ways we can use that to actually quickly create content or try ideas. For me, those are the two ways we have been looking at it.
I think with VR, and this is just my own take, that the level of immersion will dictate how long you can stay immersed. For example, if you were doing something intense, you maybe don’t want to go for more than 5 minutes, and if it’s walking around a museum where it’s more passive, then maybe you can do an hour or two and not notice. But it absolutely is a thing for storytelling, I just don’t know what that is yet. My question for VR, though, is that a lot of times there are environments created in the computer, whether it be the restaurant in Ratatouille or say a village created for some movie. Do you ever think that could be released as something people could walk through, since you have already done the legwork on design?
May: Yeah. We have only been doing experiments, to caveat anything I have said, but John Lasseter has said, “I always want to make worlds that people want to go see.” Worlds they want to visit, that they want to live in. So with Cars, I want people to want to go see Radiator Springs, and it seems like that would be something we could easily do. The performance is still not high enough to do really detailed kinds of things, and we really want to give you a Cars environment, but we’ll get there. And even now it’s not too bad. So I think that would be interesting.
What’s the next technical hurdle that you guys have been trying to overcome for a while that is maybe now actually on the horizon?
May: Probably on the rendering front. I showed that demo today where you can move a light and you get this noisy image that just starts filling in. We’re close, using GPUs mainly, to having that be instantaneous, so that you have real-time, high-quality path tracing; we are actually on the cusp of being able to do that. Giving us this high-quality rendering but doing it so that it’s more instantaneous for the artist using it.
One of the things I saw you talking about, and I noticed it today, is that RenderMan was able to render pretty quickly; when you made a change, it was re-rendering right there. So my question is: do you think it will ever get to a place where it’s almost instantaneously done?
May: That’s what I am talking about; we’re getting close, actually, using GPUs to be able to deliver that so it’s instantaneous.
That used to take overnight, right?
May: Yeah, at best minutes, sometimes overnight. So we are doing some work right now with shading where you can build the material appearance of the objects, and we ray trace it on the GPU, and it’s instantaneous; you don’t see that kind of noisy thing when it comes in. It just comes in immediately. It’s pretty cool. And that could eventually, as I mentioned, feed into VR applications, because game engines can give you realistic results to a certain level. To really get sophisticated, realistic environments you need to do light transport, and to do that you need path tracing, and you need to do it at 120 frames per second. We are not that far from it.
Here are some of our recent Finding Dory articles:
- ‘Finding Dory’ Director Andrew Stanton and Producer Lindsey Collins Reveal How the Sequel Got Made
- First ‘Finding Dory’ Clip Rides the Current With Crush the Turtle One More Time
- ‘Finding Dory’: New Trailer Reveals the Pixar Sequel’s Colorful Newcomers
- New ‘Finding Dory’ Images and Concept Art Go Behind the Scenes of Pixar’s Sea-quel