Q&A with Adobe’s Marc Levoy following his election to the National Academy of Engineering

Photo of Marc Levoy. Image source: Linda A. Cicero / Stanford News Service.

Adobe’s own VP and Fellow, Marc Levoy, was recently elected to the National Academy of Engineering, recognized for his work in computer graphics and digital/computational photography. We had a chance to talk with Marc about this award, his innovation journey through the tech industry, and much more.


Congratulations on being elected to the National Academy of Engineering for your contributions to computer graphics and computational photography! What does this recognition mean to you?

I spent most of my career in Stanford's Department of Computer Science (not Engineering!), and like most computer scientists I published most of my papers in peer-reviewed scientific journals. To be recognized by the National Academy of Engineering (not Science) means you've done something that changed the practice of computing, or an industry that depends on computing — in my case digital photography. Most of the accolades for my work at Google came through coverage of Pixel phones by the tech press, and some industry awards. Not all computer scientists value such crossovers from academia to industry. That my academic colleagues noticed this, and considered it worthy of recognition, feels good.

Reflecting back on your career, is there a particular moment or project that you're most proud of or feel was most impactful?

For an academic, getting up in the morning and checking to see who has cited your latest paper is a big deal. During the years I was at Google, this meant looking for great photos people had taken using my team's computational photography features and posted to social media. This was especially true for Night Sight, which let casual photographers take pictures in very low light. We were enabling millions of people to be creative in a way they couldn't before. I was particularly moved when someone posted a precious picture of their child asleep in a crib in a darkened room, with the caption, "What sort of sorcery is this Google?" That tweet meant a lot to me. And now that I’m here at Adobe, I’d like to continue creating photographic experiences that astonish people. As Arthur C. Clarke wrote 60 years ago, “Any sufficiently advanced technology is indistinguishable from magic.” I want to be accused of sorcery again!

In your view, what's the next big thing in computational photography?

I would say computational video. For every magical new photography feature companies have introduced on smartphones, there is an analogous feature one can imagine for video. Until recently, such features were too expensive to compute on a phone and hard to control. Thus, they were mostly the purview of special effects studios, with everything being done "in post", meaning during editing. With more powerful mobile processors, those constraints are beginning to melt away, and it couldn't be more exciting. Adobe has created the most powerful video editing apps in the industry (mainly Premiere and After Effects), but has so far been reluctant to enter the video recording space. As mobile cameras get better, this becomes a natural next step. Videography, like photography, should be a real-time collaboration between the photographer and his or her camera, with algorithms and AI playing a mediating role.

Switching gears, you joined Adobe in the middle of 2020, what are you working on these days?

At Google my goal was to democratize good photography. At Adobe my goal is to democratize creative photography. Adobe is an attractive place to do this, because it caters to people who are trying to take their photography to the next level, and are therefore willing to spend a bit longer composing and capturing a picture. My team is working on some exciting projects, but for now I’ll have to keep things under wraps. Stay tuned!

Given the state of the workplace since you joined Adobe, where are you working these days?

Mostly at home, but I try to spend at least one day per week in the office, and most of my team joins me on that day. Some things are best done in person — like brainstorming at the whiteboard. It fosters more give and take, and more creativity. Other things can't be done remotely at all, like huddling around a monitor to argue about the right color for a sunset. We do a lot of that on my team, and you can't do it over video chat.

You actively encourage your team to publish research. Why is that, and what are some of the exciting areas they are exploring these days?

I encourage my team to publish because we’re trying to invent and ship features we hope the competition hasn’t thought of yet. At Google and now here at Adobe, I've tried to hire mostly PhD superstars. They're smart, they're creative, and they think of things that others haven't. But these folks want to be recognized for what they've invented, they want to talk about their work at conferences, and they want feedback from their peers. In short, they want to be part of a research community. To attract this caliber of people, I need to let them publish. Industrial Light & Magic and Pixar under Ed Catmull used the same strategy. In fact, I learned it from him.

Does this strategy let competitors catch up faster? Sometimes yes, and this is arguably why Apple's smartphone photography got good so quickly over the last 3 years. How can a team that publishes respond to this threat? Perhaps delay publication a bit. Otherwise, run faster and breathe deeper. Invent more cool stuff. (By the way, patents seldom work, at least not in computer software.)

It's still early days for my team at Adobe, and we've only recently submitted our first raft of papers for publication. Since the process of reviewing papers is supposed to be double-blind, meaning that authors don't know who is reviewing their paper, and reviewers don't know who submitted the paper they are reviewing, I probably shouldn't talk about what we have submitted, at least not during the review process.

As a team pushing boundaries and working on things that have never been done before, the Emerging Products Group (EPG) at Adobe is basically a startup within a large company. What's that like?

"Incubator" might be a more accurate description, since we don't need to make pitches to investors. And enriching ourselves is not a primary motivation. We work mostly out of passion, and for the thrill of success. This strategy has pros and cons. It gives us autonomy, and agility. Said another way, we can make our own choices on features and technologies, and we can pivot fast. The biggest danger of being in an incubator is isolation. If we want to integrate with another app, or convince them to build a technology for us, it takes some convincing. We frequently end up rolling our own. That is obviously less efficient, but it lets us rethink the problem we're solving, and that often leads to new ideas.

Smartphones have already revolutionized photography, and AI is poised to do the same. What are some of the AI breakthroughs that have changed photography?

AI has been improving photography for longer than most people realize. All cameras, even SLRs, have face detectors, which they use when focusing. (By the way, face detection is different from face recognition. Most cameras don't know or care who you are — they just want to find your eyes and make sure they're in sharp focus.) The best face detectors use AI. Hopefully, camera makers have trained that AI on a diverse range of face types and colors.
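
To make the detection-versus-recognition distinction concrete, here's a minimal sketch using OpenCV's classical Haar-cascade detectors. These predate modern AI detectors and are shown only because they're compact; the function name and structure are illustrative, not any camera's actual pipeline:

```python
# Face *detection*, not recognition: return boxes around eyes, no identities.
import cv2

def find_eyes(bgr_image):
    """Return eye bounding boxes -- the regions an autofocus system might target."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    eyes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # Look for eyes only inside each detected face region.
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            eyes.append((x + ex, y + ey, ew, eh))
    return eyes  # nothing here knows *whose* eyes these are
```

A learned detector would replace the cascades, but the contract is the same: boxes in, focus targets out.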

Another AI success story is white balancing. Deciding how a scene was illuminated, and partly correcting for strongly colored illumination, is what mathematicians call an ill-posed problem. Is that park bench yellow because it was painted yellow, or because it was painted white, but is being illuminated by a yellow sodium vapor streetlamp? Until 5 years ago, white balancing was solved mainly by seat-of-the-pants heuristics. As part of our paper about Night Sight at Google, we described an AI-based white balancing algorithm. It worked well. There are undoubtedly other cameras that use AI-based white balancing. It's a big success story for AI in photography.
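
For contrast, here's what one of those seat-of-the-pants heuristics looks like: the classic "gray world" assumption. This is an illustrative sketch of the old heuristic approach, not the learned algorithm from the Night Sight paper:

```python
# Gray-world white balance: assume the average color of the scene is neutral
# gray, and scale each channel so the per-channel averages match.
import numpy as np

def gray_world_white_balance(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means.mean() / means              # push each channel toward gray
    return np.clip(rgb * gains, 0.0, 1.0)
```

Note how the heuristic dodges the ill-posed question entirely: it can't tell yellow paint from a yellow streetlamp, so it simply assumes any overall cast is illumination.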

In other areas, AI is increasingly being used to detect skies, and apply special processing to them (color sweetening, denoising, and maybe darkening or lightening, depending on the time of day). Many smartphones use AI to classify scene type, so that pictures of food are processed in a way that makes the food look appetizing. AI is also used to estimate depth maps in many phones, which helps them defocus backgrounds for portraits. Several companies are working on AI-based relighting of portraits, although so far with mixed results. Adobe is pushing the boundary in this area with its Sensei-powered neural filters in Photoshop, but relighting is still a hard problem.
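
As a rough illustration of the depth-map idea, the toy sketch below blurs everything far from the subject's estimated depth. Real portrait modes use matting and spatially varying blur kernels, and the helper name here is hypothetical:

```python
# Toy "portrait mode": blur pixels whose depth differs from the subject's.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait(rgb, depth, subject_depth, tolerance=0.1, blur_sigma=6.0):
    """rgb: (H, W, 3) floats in [0, 1]; depth: (H, W) floats, larger = farther."""
    blurred = gaussian_filter(rgb, sigma=(blur_sigma, blur_sigma, 0))
    # Mask is 1 on the subject, 0 on the background, with a soft transition.
    mask = np.clip((2 * tolerance - np.abs(depth - subject_depth)) / tolerance,
                   0.0, 1.0)[..., None]
    return mask * rgb + (1.0 - mask) * blurred
```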

How do you balance technology that improves photography with an artist's creativity and individual expression?

There's a myth in photography of the "straight photograph". Maybe the myth grew out of Ansel Adams and Group f/64, which he co-founded in 1932. Similarly, cameras often have a processing option called "Natural". But there's no such thing as a straight photograph, or "natural" processing. The world has higher dynamic range (the brightness difference between darks and lights) than a photograph can reproduce. And our eyes are adaptive sensing engines. What we think we see depends on what's around us in the scene — that's why optical illusions work.

As a result, any digital processing system adjusts the colors and tones it records, and these adjustments are inevitably partly subjective. I was the primary "tastemaker" for Pixel phones for several years. I liked the paintings of Caravaggio, so Pixel 2 through 4 had a dark, contrasty look. Apple certainly has tastemakers — I know some of them.
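
One concrete way to see why tone rendering is a matter of taste: a global tone-mapping operator like Reinhard's compresses unbounded scene luminance into a displayable range, and a single "key" parameter shifts the whole look brighter or darker, flatter or contrastier. This is a textbook operator, not what any particular phone ships:

```python
# Reinhard global tone mapping: squeeze scene-linear luminance into [0, 1).
import numpy as np

def reinhard_tonemap(luminance, key=0.18):
    """luminance: (H, W) floats >= 0, in arbitrary scene-linear units."""
    log_avg = np.exp(np.mean(np.log(luminance + 1e-6)))  # geometric mean
    scaled = key * luminance / log_avg                   # expose for the "key"
    return scaled / (1.0 + scaled)                       # compress into [0, 1)
```

Choose a lower key and you get a dark, contrasty rendering; a higher key, a bright and airy one. Neither is more "natural" than the other.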

The key to artistic creativity lies in having control over the image. Traditionally this happens after the picture is captured. Adobe built a company on this premise. If you capture RAW, you typically have more control, so Adobe Lightroom specializes in reading RAW files (including its own DNG format).

What's exciting about computational photography is that, far from taking control away from the artist, it can give artists *more* control, and at the point of capture. Pixel's Dual Exposure Controls are one example of this — separate controls for highlights and shadows, rather than a single control for exposure compensation. Apple's Photographic Styles, which are live in the viewfinder, are another example. This is just the tip of the iceberg. We'll start seeing more controls, and more opportunity for artistic expression, in cameras. I can't wait!
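
For intuition, here's an illustrative sketch (emphatically not Pixel's implementation) of what separate shadow and highlight gains could look like: weight each pixel by its luminance and blend two independent gains.

```python
# Separate shadow/highlight gains, blended by a per-pixel luminance weight.
import numpy as np

def dual_exposure(rgb, shadow_gain=1.0, highlight_gain=1.0):
    """rgb: (H, W, 3) floats in [0, 1]. A gain > 1 brightens, < 1 darkens."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance
    w = np.clip(luma, 0.0, 1.0)[..., None]            # 0 = shadow, 1 = highlight
    gain = (1.0 - w) * shadow_gain + w * highlight_gain
    return np.clip(rgb * gain, 0.0, 1.0)
```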

You have several open roles right now. Can you tell us what makes these opportunities so exciting, and where people can go to learn more?

Let me return to the point I made earlier about democratizing creative photography. More than one reviewer of recent Google and Apple phones has noted that while both include the word "Pro" in their names, they don't offer photography pros any real control over the camera. Samsung phones do offer manual controls, but as soon as you invoke them, you drop back from burst-mode computational photography to single-frame captures, which look noisy. Nobody has yet married pro controls to computational photography image processing pipelines. What better place to do this than Adobe, which pioneered tools for pro photographers?
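
The noise argument is easy to see: averaging N aligned frames cuts random noise by roughly the square root of N, which is exactly what a single manual-mode frame gives up. Here's a toy merge assuming perfectly aligned frames (real pipelines align and merge robustly):

```python
# Burst merging in its simplest form: average pre-aligned frames.
import numpy as np

def merge_burst(frames):
    """frames: (N, H, W) or (N, H, W, 3) floats -- a pre-aligned burst."""
    return np.mean(frames, axis=0)

# Quick check: noise drops by ~sqrt(N).
rng = np.random.default_rng(0)
burst = 0.5 + 0.1 * rng.standard_normal((9, 100, 100))   # 9 noisy frames
print(burst[0].std(), merge_burst(burst).std())          # ~0.10 vs ~0.033
```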

Back to your question. If you have a background in computer graphics, computer vision, or machine learning, and a passion to bring your scientific knowledge to shipping products, send us an email. We're looking for junior and senior computer scientists, engineers, and product managers, with or without PhDs, who have a passion for photography, a startup mentality, and the courage to pivot fast and invent solutions as they go along. Here is one of our job postings.

Looking back on your career, what was the best piece of advice you received from a mentor?

When I applied for faculty positions, I was invited to interview at Stanford. Like most people, I have imposter syndrome. When I got an offer from Stanford, my first thought was: What are they thinking? I'm terrible at math. I can't get tenure there!

To help me decide I called Don Greenberg, my first academic mentor, from Cornell. He told me, rather bluntly as I recall, that I was acting like a coward. He said that if I didn't take the job at Stanford, I would spend the rest of my life wondering, "What if?" It was the best advice I ever got. I swallowed my fears and accepted Stanford's offer. That was 30-odd years ago, and it worked out ok.

I still have imposter syndrome, and I still can't do math, but every once in a while I stumble across a decent idea others haven't thought of. And I like teaching. These things turned out to be more important than I thought.

For anyone starting their career in the tech industry right now, what would be your best piece of advice?

Get the best education you can, at the best school you can get into. Not because better schools give you better connections, but because better schools are more likely to put you in front of professors who have invented cool stuff and changed the world. You want them as role models.

Take courses outside your major and develop broad interests. I studied Architecture in school, and the undergraduate course project I'm proudest of was a paper about Michelangelo. Twenty-five years later I took 30 students with me to Florence for a year, where we digitized the statues of Michelangelo using laser scanners. How could a computer science professor know enough about art to undertake such a weird project? Because I studied it in school.

Another question I frequently get is — Should I go for a PhD? A doctoral program lets you work on one thing for 5 years, hopefully making a contribution to human knowledge, and it proves to future employers that you can initiate and manage a project yourself. If you want to enter academia a PhD is essential. But 5 years is a long time out of your life, and for many exciting careers in the tech industry it's not really necessary.

Make time for internships in your industry. Whatever degree you pursue, do an internship every summer, and not necessarily at the same company each time. Use school and these internships to broaden your education. There's time to specialize later.

Make sure you’re having fun. If you're not, then make a change. As Steve Jobs said in a famous commencement address at Stanford in 2005, "Your time is limited, so don't waste it living someone else's life." It is excellent advice. I have lived by it. So should you.