Product Management 2.0: Lessons from Google in Building AI Products in the Software 2.0 World


In November 2017, the Director of Artificial Intelligence (AI) at Tesla wrote a post on Medium stating that neural networks “represent the beginning of a fundamental shift in how we write software.” Rather than coding rules line by line for each module, neural networks are trained on labeled data which they use to discern their own rules, often finding previously unknown patterns. This change in software necessitates a new strategy for product management. As an early adopter of this pivotal technology, Google is the premier company to learn from in this area.

Building Internally, Sharing Globally

Google has been using machine learning to improve its products throughout its history. When Apoorv Saxena, a former McKinsey consultant and entrepreneur, joined Google in 2012, the company was already making use of deep learning, access to high computational power, vast datasets, and cloud-based storage. With these tools, it was achieving breakthrough results in domains like image understanding, speech recognition, and text translation.

“We started seeing some amazing improvement in our ability to make predictions on unstructured data and language understanding,” explains Saxena, Head of Product Management for AI Verticals. “And a lot of those dramatic changes and improvements were coming out of Google’s own research.”

One key decision Google made when incorporating machine learning, according to Saxena, was to not only publish the results of this research publicly, but to build internal machine learning tools that enable Google employees to build products. “This accelerated the process of applying machine learning to our core applications,” he explains. “I’ll give you a very personal example from when I was a product manager on the G Suite team. Every year, Google does a hack-a-thon. I thought, I’ll put together a team to create a bot. This was 2014, when bots were still early. We set out to create a simple bot that understands a Google Hangouts thread and can answer questions.

“The beauty was that we were able to do this in a 48-hour hack-a-thon using the tools that Google had internally built. If I’d been outside the company, the same task would have taken me at least six months, with worse quality. My hack-a-thon team won second prize and that was for me, personally, an aha moment––that such a powerful tool could be created so simply and easily.”

Saxena wasn’t the only one who saw the technology as a breakthrough––Google quickly set out to train tens of thousands of employees on the potential of machine learning. This training was also made publicly available, and the company built an open source deep learning framework, TensorFlow, to help democratize the adoption of AI.

The Software 2.0 Difference

As more companies incorporate AI algorithms, there is an increasing realization that building AI software (also called Software 2.0) differs significantly from traditional coding.

Creating traditional software relies upon establishing “if-then” rules. While this is doable, if cumbersome for certain applications––say, “if an email contains ‘Nigerian prince,’ it is spam”––it is nearly impossible to hardwire all the rules necessary for complex situations––say, self-driving cars or a Google Assistant that answers your questions.
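The "if-then" style the paragraph describes can be sketched in a few lines. This is a hypothetical Software 1.0 spam filter, with every rule written by hand; the phrases and function name are illustrative, not from any real system.

```python
# Software 1.0: the programmer hand-codes every rule explicitly.
def is_spam(email_text: str) -> bool:
    suspicious_phrases = ["nigerian prince", "wire transfer", "claim your prize"]
    text = email_text.lower()
    # Flag the email if any hard-coded phrase appears in it.
    return any(phrase in text for phrase in suspicious_phrases)

print(is_spam("Greetings, I am a Nigerian prince with an offer"))  # True
print(is_spam("Meeting moved to 3pm"))  # False
```

This works for a handful of known patterns, but as the article notes, enumerating rules by hand quickly becomes intractable for complex domains.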

Such complex products are now written using deep learning techniques that require a training data set (say, examples of spam and normal emails) to analyze patterns and establish rules. While this code is much simpler to write compared to traditional software, Saxena points out that there are no standardized tools and frameworks available to build Software 2.0 products—at least, not yet.
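For contrast, here is a toy Software 2.0 counterpart: instead of hand-writing rules, a classifier infers word statistics from a tiny labeled training set. The data and scoring scheme are illustrative only; real systems train neural networks on far larger datasets.

```python
from collections import Counter

# Labeled training examples: the "rules" are derived from these, not coded.
training_data = [
    ("win money now claim prize", "spam"),
    ("nigerian prince needs help", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text: str) -> str:
    # Score each label by the training-word overlap; add-one smoothing
    # keeps unseen words from zeroing out a label.
    scores = {}
    for label, counts in word_counts.items():
        scores[label] = sum(counts[w] + 1 for w in text.split())
    return max(scores, key=scores.get)

print(classify("claim your prize money"))  # spam
print(classify("meeting for lunch"))       # ham
```

Note that the classification logic never mentions "Nigerian prince": the pattern is learned from the data, which is the core shift the article describes.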

“The Software 2.0 stack is just emerging,” he says. “In our view, its evolution is going to be very similar to how mobile and web stacks standardized over time. Our hope is that TensorFlow is going to be one of the frameworks that gets standardized, but there are still many missing pieces. For example, there is no standardized way to manage data scientists’ workflows for gathering, cleaning, and labeling training datasets. A set of tools needs to be created to truly drive widespread adoption of AI.”

Reinventing Product Management

Just as building Software 2.0 products requires new development tools and practices, product managers (PMs) overseeing these solutions need to reinvent their craft. Traditionally, product management starts with defining customer journeys. “You’re focused on how customers use your product,” explains Saxena. “How they are going to send or read or sort.” This means that the buildout to add new features is constant.

While the customer-centric approach is still relevant in product management 2.0, there’s a completely new set of skills, processes, and tools to apply. “For example, Google Assistant is a 2.0 product. But there’s no user interface in the traditional sense, such as the one you have in a mobile app with a touch screen,” says Saxena. “You as a product manager need to redefine your customer user journey for a conversational interface such as the one in Google Assistant. In this case, there is no set user journey that you optimize through UI artifacts to guide user behavior. Conversations are much more free-flowing and you need to modify your approach accordingly.

“You also need to redefine the concept of minimal viable product or features,” he continues. “To build a new feature in Google Assistant such as answering ‘How is the weather today,’ you need to start thinking about the minimal viable dataset you need to bootstrap your feature. In this case, it will be a dataset of user utterances that all correspond to the user intent to find information about the local weather.”
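A "minimal viable dataset" of the kind Saxena describes could look something like the sketch below: utterances that all map to a single weather intent, plus a crude keyword-overlap check. The intent name, examples, and matching logic are all hypothetical, not Google's actual schema.

```python
# Hypothetical minimal viable dataset for one Assistant-style feature:
# user utterances that all map to a single "get_weather" intent.
weather_intent_data = [
    ("how is the weather today", "get_weather"),
    ("what is the forecast for tomorrow", "get_weather"),
    ("will it rain this afternoon", "get_weather"),
    ("do i need an umbrella", "get_weather"),
]

# Every word seen in training becomes weak evidence for the intent.
weather_vocab = {w for text, _ in weather_intent_data for w in text.split()}

def matches_weather_intent(utterance: str, min_overlap: int = 2) -> bool:
    # Crude overlap check; a production system would train a classifier
    # on thousands of such utterances instead.
    return len(set(utterance.lower().split()) & weather_vocab) >= min_overlap

print(matches_weather_intent("is it going to rain today"))  # True
print(matches_weather_intent("play some music"))            # False
```

The PM's job here is less about specifying screens and more about deciding which utterances the dataset must cover before the feature is viable.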

The other big difference between traditional and new product management, according to Saxena, is designing the user experience to be fault-tolerant: “In Software 1.0, when I write code, I’m a hundred percent sure that if you ask me to send email, the code will always send email. In a Software 2.0 world, it’s essentially a model predicting with very high probability what you are asking it to do. So, if you ask a model trained to understand images of animals whether an image is of a dog or a cat, the model is never a hundred percent sure it is either.”

The question for product managers becomes: how do you handle failure when the AI model gets a prediction wrong? One way is to have the model return a percentage of certainty. Another is to suggest alternate responses. A third is to provide reasons for the model’s prediction. Saxena calls this “graceful failing.”
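One way to think about "graceful failing" in code: below a confidence threshold, the product admits uncertainty and offers the runner-up answer instead of committing to a single label. The `predict` stub and its probabilities are invented stand-ins for a real model's output.

```python
def predict(image) -> dict:
    # Hypothetical model output: label -> probability.
    return {"dog": 0.55, "cat": 0.40, "fox": 0.05}

def respond(image, threshold: float = 0.8) -> str:
    probs = predict(image)
    best = max(probs, key=probs.get)
    if probs[best] >= threshold:
        # Confident enough to answer directly.
        return f"This looks like a {best}."
    # Graceful failing: surface the uncertainty and offer an alternative.
    runner_up = sorted(probs, key=probs.get, reverse=True)[1]
    return (f"I'm only {probs[best]:.0%} sure this is a {best}; "
            f"it might also be a {runner_up}.")
```

With the stub above, `respond(None)` falls below the threshold and hedges between "dog" and "cat" rather than asserting one of them.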

Finally, PMs must be aware that training datasets can hold hidden biases. For example, an algorithm to identify CEO candidates may deliver more men than women. Saxena asserts, “As a product manager, if you don’t have an understanding of the biases that can exist in your training data and your model, you will design an inferior product. It is the job of the product manager to ensure that the model’s predictions are fair. This is something most traditional PMs never had to worry about.” Google is proactively providing tools for product managers to better understand such biases. Alongside the recent release of its Contact Center AI and AutoML products, Google published a set of AI principles and guidelines for product managers to follow to ensure model fairness.
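A minimal sketch of the kind of fairness check a PM might ask for: compare the model's positive-prediction rate across groups (a simple demographic-parity measure). The data and numbers here are entirely illustrative, not from the article.

```python
# Illustrative model outputs for a candidate-screening scenario.
predictions = [
    {"group": "men",   "selected": True},
    {"group": "men",   "selected": True},
    {"group": "men",   "selected": False},
    {"group": "women", "selected": True},
    {"group": "women", "selected": False},
    {"group": "women", "selected": False},
]

def selection_rate(rows, group):
    # Fraction of candidates in the group that the model selected.
    subset = [r for r in rows if r["group"] == group]
    return sum(r["selected"] for r in subset) / len(subset)

rate_men = selection_rate(predictions, "men")      # 2/3
rate_women = selection_rate(predictions, "women")  # 1/3
# A large gap between the two rates flags a potential bias to investigate.
```

A real fairness review involves more than one metric, but even a check this simple makes the bias question concrete for a product team.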

All the above also implies that the profile of a product manager hire changes a bit, too. In fact, when hiring PMs, Saxena looks for high levels of empathy. “AI is a powerful tool. With great power comes great responsibility, and we have to be very careful about the technology we are building,” he explains. “We need people who can take the long view and understand that there are some very valid ethical dilemmas in how the power of AI is used.”

Strategic Partnerships

As much as the Google team accomplishes, it is not done in a vacuum. Saxena notes that “a whole new AI ecosystem is evolving,” as the company partners with companies like Nvidia and Intel to build specialized hardware, with the Kubernetes community to optimize computing infrastructure, and with Figure Eight to create data labels.

“We want to democratize access to AI,” Saxena explains. “This means contributing to the open source movement and working with partners to enable capabilities across the whole stack.”

“AI is a very generic set of tools,” he continues. “The big value we believe our partners can provide is taking the tools we are building and customizing them to a particular use case, or a particular vertical.

“AI is going to be transformative across every industry,” he concludes. “Almost every week, I get pleasantly surprised by the novel ways people are using the tools we have built.”

Note: Since this interview, Apoorv Saxena has transitioned to Global Head of AI at JPMorgan Chase & Co.
