What It Takes to Become a Great Product Manager by Julia Austin

Because I teach a course on product management at Harvard Business School, I am routinely asked “What is the role of a product manager?” The role of product manager (PM) is often referred to as the “CEO of the product.” I disagree because, as Martin Eriksson points out, “Product managers simply don’t have any direct authority over most of the things needed to make their products successful — from user and data research through design and development to marketing, sales, and support.” PMs are not the CEO of product, and their roles vary widely depending on a number of factors. So, what should you consider if you’re thinking of pursuing a PM role?

Aspiring PMs should consider three primary factors when evaluating a role: core competencies, emotional intelligence (EQ), and company fit. The best PMs I have worked with have mastered the core competencies, have a high EQ, and work for the right company for them. Beyond shipping new features on a regular cadence and keeping the peace between engineering and the design team, the best PMs create products with strong user adoption that have exponential revenue growth and perhaps even disrupt an industry.

https://hbr.org/2017/12/what-it-takes-to-become-a-great-product-manager

Distribution by Ben Horowitz

When I ask new entrepreneurs what their distribution model will be, I often get answers like: “I don’t want to hire any of those Rolex-wearing, BMW-driving, overly aggressive enterprise sales slimeballs, so we are going to distribute our product like Dropbox did.” In addition to taking stereotyping to a whole new level, this answer demonstrates a deep misunderstanding of how sales channels should be designed.

https://a16z.com/2017/06/09/distribution-model-sales-channels/

 

Matt Turck on AI: "Frontier AI: How far are we from artificial “general” intelligence, really?"

Matt Turck has a fascinating article on artificial intelligence with insights on the progress that has been made and where we are right now. I found it a good read, as it touches on the various approaches being pursued on the path to "general" artificial intelligence. It seems that in the last year a lot of progress has been made on the hardware front, which has led to gains, but we may still be far off from the robots taking over.

http://mattturck.com/frontierai/

Some highlights:

GANs, or "generative adversarial networks," are a much more recent method, directly related to unsupervised deep learning, pioneered by Ian Goodfellow in 2014, then a PhD student at the University of Montreal. GANs work by creating a rivalry between two neural nets, trained on the same data. One network (the generator) creates outputs (like photos) that are as realistic as possible; the other network (the discriminator) compares the photos against the data set it was trained on and tries to determine whether each photo is real or fake; the first network then adjusts its parameters for creating new images, and so on and so forth. GANs have had their own evolution, with multiple versions of GAN appearing just in 2017 (WGAN, BEGAN, CycleGAN, Progressive GAN).
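To make the generator/discriminator rivalry concrete, here is a minimal training-loop sketch in PyTorch. The toy data (a shifted Gaussian standing in for "real" samples), the tiny network sizes, and the hyperparameters are illustrative assumptions, not Goodfellow's original setup.

```python
# Minimal GAN sketch: a generator and a discriminator trained in opposition.
# Toy data and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for the real data set
    fake = G(torch.randn(64, latent_dim))           # generator's current outputs

    # Discriminator step: learn to tell real samples from fakes.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: adjust parameters so fakes are classified as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```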

How close is AlphaZero to AGI? Demis Hassabis, the CEO of DeepMind, called AlphaZero's playstyle "alien," because it would sometimes win with completely counterintuitive moves like sacrifices.  Seeing a computer program teach itself the most complex human games to a world-class level in a mere few hours is an unnerving experience that would appear close to a form of intelligence. One key counter-argument in the AI community is that AlphaZero is an impressive exercise in brute force: AlphaZero was trained via self-play using 5,000 first-generation TPUs and 64 second-generation TPUs; once trained, it ran on a single machine with 4 TPUs. AI researchers also point out that in reinforcement learning the AI has no idea what it is actually doing (like playing a game) and is limited to the specific constraints it was given (the rules of the game).  Here is an interesting blog post disputing whether AlphaZero is a true scientific breakthrough.
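As a toy illustration of that last point, here is a self-play reinforcement learning sketch on single-pile Nim (my own example, far simpler than AlphaZero's search plus neural networks): the agent never "knows" it is playing a game; it only sees states, legal moves, and a reward signal defined by the rules it was given. The game, table size, and hyperparameters are all assumptions for illustration.

```python
# Toy self-play learning on Nim (one pile, take 1-3 per turn, last to take wins).
# Both "players" share the same value table, i.e. the agent trains against itself.
import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(pile, move)] -> estimated value of that move
alpha, epsilon = 0.1, 0.2     # learning rate and exploration rate (assumed values)

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile):
    moves = legal_moves(pile)
    if random.random() < epsilon:
        return random.choice(moves)               # explore
    return max(moves, key=lambda m: Q[(pile, m)]) # exploit current estimates

for episode in range(50_000):
    pile, history = 10, []                 # history of (state, move), players alternate
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move

    # The player who took the last object wins (+1); the opponent loses (-1).
    reward = 1.0
    for state, move in reversed(history):  # Monte Carlo-style update, alternating credit
        Q[(state, move)] += alpha * (reward - Q[(state, move)])
        reward = -reward

# With enough episodes, the greedy move from a pile of 10 tends toward 2
# (leaving a multiple of 4 for the opponent, the known winning strategy).
print(max(legal_moves(10), key=lambda m: Q[(10, m)]))
```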

 

Transfer learning has mostly been challenging to make work: it works well when the tasks are closely related, but becomes much more complex beyond that. Still, this is a key area of focus for AI research. DeepMind made significant progress with its PathNet project (see a good overview here).
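For context, the most common transfer-learning pattern today is to reuse a network pretrained on a related task and retrain only a small head on the new task. The sketch below shows that pattern with PyTorch/torchvision; this is ordinary fine-tuning, not DeepMind's PathNet, and the model choice and class count are illustrative assumptions.

```python
# Standard transfer-learning sketch: freeze a pretrained backbone, retrain a new head.
import torch
import torch.nn as nn
from torchvision import models

# Features learned on ImageNet; newer torchvision versions use the weights= argument.
model = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for the new, closely related task (e.g., 10 classes).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```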