
Sameer Qureshi

In Conversation

Vol. 15 // 2020

We ask Sameer Qureshi, Director of Product (Software), Level 5, Self-Driving Division at Lyft, about the future of the automobile industry.

COGNITIVE TIMES: What has your career journey looked like so far? Could you tell us a little bit about how you got to Lyft?

SAMEER QURESHI: I’ve always been a software person. I went to Carnegie Mellon for a bachelor’s in Computer Science and to Stanford for a master’s in Computer Science. And I’ve always wanted to work on software that sits very close to hardware (think operating systems, device drivers, robots, etc.). So I spent two years at IBM on the AIX Core Kernel Team (AIX is IBM’s variant of Unix). I then spent 8+ years at Intel working on device drivers for Intel’s GPUs, focusing specifically on video – and along the way, I worked with Apple to help enable what became FaceTime on the Mac, leveraging Intel’s GPU capabilities.

Having spent many years working with hardware, I wanted a new challenge that wasn’t directly about silicon but still had some hardware aspect. So I decided to join Tesla to work on the Model X, with its iconic Falcon Doors. Until I joined Tesla, I didn’t realize how much software goes into one of their cars. After taking the Model X through launch, production ramp, and the development of its variants, I was asked to help the Autopilot team with the Autopilot 2.0 launch – which at the time was a gargantuan task, given multiple changes in Autopilot leadership and the immense time pressure we were under to bring Autopilot 2.0 to parity with Autopilot 1.0.

Finally, after 3.5 years at Tesla, I decided to look for a different opportunity while staying within the autonomous vehicle industry. My belief is that the very first deployment of AVs will be for rideshare. Given that, Lyft was an obvious choice: they already had a large ridesharing network and were actively bootstrapping Lyft Level 5 (the AV division) while building a solid team.

CT: There are so many competing approaches to self-driving cars; the Waymo approach, the Tesla approach, the GM approach, etc. What are the key differences between these approaches and which do you think has the best chance of working?

SQ: The good thing about all these approaches is that they all lead to self-driving cars. And then all these self-driving cars get deployed on rideshare networks! The best approach is one that enables AVs to be deployed at scale on rideshare networks, which is why Lyft has a two-pronged approach to self-driving. We have our Level 5 division which is focused on building the tech for AVs in-house, and an Open Platform team that partners with companies such as Waymo and Motional to get their AVs on our network.

Tesla and Waymo have significantly different approaches to solving the problem, but that is driven by the fact that Tesla already has more than a million cars on roads worldwide that need to be updated via over-the-air software updates for improved functionality and increased capabilities. Waymo, on the other hand, is building an AV from the ground up, outfitting its cars with far more sensors and compute – and designing the car from a driverless perspective up front.

Both approaches involve a significant amount of machine learning, designed to enable various features of the AV stack. Both also have different pros and cons and different challenges. But the jury is still out on which approach has the best chance of working. If the industry knew the correct answer, everyone would be following the same approach!

Images courtesy of Lyft

CT: In your view, when will more than 50% of the cars in the U.S. be self-driving capable?

SQ: This is still many years out. Humans are inherently very unpredictable. I co-authored a blog post that explains some of the challenges and why AVs are hard. If all cars could become self-driving overnight with a snap of my fingers, the problem would be a lot easier. But a world where human-driven cars and AVs need to co-exist makes things harder.

Having said that, we’re already seeing Waymo offer some driverless rides in Arizona, and there are numerous YouTube videos of awestruck passengers (and bystanders). And Cruise has stated that they will be starting driverless rides in San Francisco by the end of the year. If this momentum continues, hopefully we will see AVs being deployed in more cities in the coming years.

CT: How do self-driving vehicles stand up to electric vehicles? Which is a bigger deal?

SQ: They are trying to solve different problems, but they are somewhat related. EVs are about sustainable transportation, pushing us toward a world with no carbon emissions from the hundreds of millions of combustion engine vehicles worldwide. The vision is for all vehicles to be electric, fueled by clean energy sources such as wind and solar. We’re already seeing all major automotive companies worldwide selling EVs, plug-in hybrids, etc. Here at Lyft we just announced – in collaboration with the Environmental Defense Fund – our commitment to reach 100% electric vehicles on the Lyft platform by 2030. By working with drivers to transition to electric vehicles, we have the potential to prevent tens of millions of metric tons of GHG emissions from entering the atmosphere and to reduce gasoline consumption by more than a billion gallons over the next decade. This includes cars in the Express Drive rental car partner program for rideshare drivers, our consumer rental car program for riders, our autonomous vehicle program, and drivers’ personal cars used on the Lyft platform.

AVs are trying to solve multiple problems. Safety is a huge one: nearly 40,000 people die in traffic accidents in the US alone every year. Convenience is another big one. Imagine being able to read, sleep, eat, or work while your car drives you home from work, safely takes your children to their Little League baseball game, or even provides mobility to blind and handicapped people. And lastly, the ridesharing industry would change radically. Combining ridesharing with AVs would put a significant dent in car ownership. If you could have nearly instant access to a car that was cheap, safe, and convenient, and only you and your family were the passengers, why would you ever want to buy your own car?

The two are, however, related. AVs have a prerequisite of being able to tap into a big power source. The standard 12-volt battery in a combustion engine car is not big enough to power all the hardware that an AV requires. Hence, all current AV platforms are based on hybrids or EVs, which allow the self-driving system (SDS) to draw power from the large electric battery.

CT: What type of machine learning is most effectively employed for autonomous vehicle applications?

SQ: Surprisingly, there are multiple components within the SDS that leverage machine learning techniques. Perception is the part of the SDS responsible for determining what the AV “sees” around itself. The Perception system is designed to recognize most objects around the AV (vehicles, bicyclists, pedestrians, traffic lights, etc.), and it is historically where most of the ML has been used to train and teach the AV to recognize things.
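As a rough illustration of that idea, here is a minimal Python sketch of a perception step: a stand-in detector produces labeled detections, and a confidence filter keeps the ones the rest of the stack should reason about. The class names, threshold, and stub detector are illustrative assumptions, not Lyft’s actual Perception system.

```python
# A minimal, illustrative sketch of a perception output step.
# Labels, thresholds, and the stub detector are assumptions for
# illustration -- not Lyft's actual Perception stack.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str         # e.g. "vehicle", "bicyclist", "pedestrian", "traffic_light"
    confidence: float   # detector score in [0, 1]
    bbox: tuple         # (x_min, y_min, x_max, y_max) in image pixels


def stub_detector(image) -> List[Detection]:
    """Stand-in for a trained object detector (in practice, a deep
    neural network trained on labeled camera / lidar data)."""
    return [
        Detection("vehicle", 0.94, (120, 200, 340, 380)),
        Detection("pedestrian", 0.57, (400, 210, 440, 330)),
    ]


def perceive(image, min_confidence: float = 0.6) -> List[Detection]:
    """Keep only detections the downstream stack should reason about."""
    return [d for d in stub_detector(image) if d.confidence >= min_confidence]


if __name__ == "__main__":
    for obj in perceive(image=None):
        print(f"{obj.label}: {obj.confidence:.2f} at {obj.bbox}")
```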

Then there are the Prediction and Planning subsystems of the AV stack. These were not as ML-heavy in the past, but the industry is shifting toward leveraging more ML for them too. Both are described briefly in the blog post I mentioned above, where I also talk about how we use a combination of rule-based systems and learned systems as we build the Prediction and Planning subsystems. We have also recently talked about how we’re using machine learning to try to solve Motion Planning, which will pave the way for us to leverage data-driven techniques to solve autonomy.
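To make the rule-based-plus-learned combination concrete, here is a small sketch in which a stand-in learned model proposes candidate trajectories and a hand-written rule filters out implausible ones. The model, the speed check, and the agent representation are illustrative assumptions, not the subsystems described in the interview.

```python
# Illustrative sketch of mixing rule-based and learned components in a
# Prediction step. The "learned_model" is a stand-in for a real ML model.
from typing import Dict, List, Tuple

Trajectory = List[Tuple[float, float]]  # (x, y) positions sampled 1 s apart


def learned_model(agent_state: Dict) -> List[Trajectory]:
    """Stand-in for an ML model that proposes likely future paths."""
    x, y = agent_state["position"]
    vx, vy = agent_state["velocity"]
    straight = [(x + vx * t, y + vy * t) for t in range(1, 4)]
    drift = [(x + vx * t, y + vy * t + 0.5 * t) for t in range(1, 4)]
    return [straight, drift]


def physically_plausible(traj: Trajectory, max_speed: float = 40.0) -> bool:
    """Rule-based filter: reject paths that imply impossible speeds."""
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_speed:
            return False
    return True


def predict(agent_state: Dict) -> List[Trajectory]:
    """Learned proposals, constrained by hand-written rules."""
    return [t for t in learned_model(agent_state) if physically_plausible(t)]


if __name__ == "__main__":
    cyclist = {"position": (0.0, 0.0), "velocity": (5.0, 0.0)}
    print(predict(cyclist))
```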

At a high level, we end up using various ML techniques: some involve large-scale labeling and training, some rely on the output of another ML subsystem, some are trained purely in an unsupervised manner on large amounts of data, and some are powered by the data we collect from our ridesharing network. As we train our ML algorithms, we make sure we’re hitting (and improving on) the metrics and results we want, while keeping the datasets we use for training constantly updated and refreshed to cover any failure conditions we might have seen.
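The "track metrics, fold failures back into the training data" loop can be sketched in a few lines of Python. The training and evaluation functions below are toy stand-ins, and the names and thresholds are illustrative assumptions rather than Lyft’s pipeline.

```python
# Toy sketch of an evaluate-and-refresh training cycle: evaluate the
# model, log the metric, and add observed failure cases back into the
# training dataset before the next round.
import random


def train(model_params: dict, dataset: list) -> dict:
    """Stand-in for a real training run; returns updated parameters."""
    updated = dict(model_params)
    updated["trained_on"] = len(dataset)
    return updated


def evaluate(model_params: dict, scenarios: list) -> tuple:
    """Stand-in evaluation: returns (accuracy, failing scenarios)."""
    failures = [s for s in scenarios if random.random() < 0.1]
    accuracy = 1.0 - len(failures) / max(len(scenarios), 1)
    return accuracy, failures


def training_cycle(dataset, held_out_scenarios, target_accuracy=0.95):
    params = {}
    for iteration in range(5):
        params = train(params, dataset)
        accuracy, failures = evaluate(params, held_out_scenarios)
        print(f"iteration {iteration}: accuracy {accuracy:.2%}")
        if accuracy >= target_accuracy:
            break
        # Fold observed failure conditions back into the training data.
        dataset.extend(failures)
    return params


if __name__ == "__main__":
    training_cycle(dataset=list(range(100)),
                   held_out_scenarios=[f"scenario_{i}" for i in range(50)])
```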

Images courtesy of Lyft

CT: What is your advice to young engineers looking to learn more about vehicular autonomy? Where do they start?

SQ: Vehicular autonomy will remain an exciting industry for many years to come. There are many challenges and problems still to be solved, and hence many opportunities for innovation, research, and breakthroughs along the way. Some have even called AVs one of the most difficult (and unsolved) problems in Computer Science.

My advice to young engineers would be to get involved in robotics, at school or at home. Understanding how robots work, combined with an understanding of how machine learning works, is the ideal foundation for someone wanting to learn more about AVs. In fact, I took a few robotics classes in my undergrad – they were my favorite classes at CMU.

Although a bit expensive, LEGO has an excellent product line called LEGO MINDSTORMS that teaches young engineers how to build and program simple robots, and they even sell various sensors that can be used with their kits. Additionally, Coursera, Udemy, and Udacity offer online classes in machine learning.
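For a beginner, most simple robots boil down to a sense-think-act loop. The sketch below simulates that pattern in Python with made-up sensor and motor functions so it runs anywhere; it is not tied to any particular kit.

```python
# A beginner-level sense-think-act loop, the pattern behind most simple
# hobby robots. The sensor and motor functions are simulated stand-ins.
import random


def read_distance_sensor() -> float:
    """Simulated ultrasonic sensor reading, in centimeters."""
    return random.uniform(5.0, 100.0)


def drive(left_speed: float, right_speed: float) -> None:
    """Simulated motor command; a real robot would move its wheels here."""
    print(f"motors: left={left_speed:+.1f}, right={right_speed:+.1f}")


def obstacle_avoidance_step(stop_distance_cm: float = 20.0) -> None:
    distance = read_distance_sensor()    # sense
    if distance < stop_distance_cm:      # think
        drive(-0.5, 0.5)                 # act: turn away from the obstacle
    else:
        drive(1.0, 1.0)                  # act: go straight


if __name__ == "__main__":
    for _ in range(5):
        obstacle_avoidance_step()
```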

CT: So when do I get my completely safe, autonomous self-driving car?

SQ: It’s not a matter of when, but where you’ll be able to get into a self-driving car. Currently, Lyft has partnerships with Waymo (in Phoenix, AZ) and Motional (in Las Vegas, NV) where Lyft riders can already request self-driving rides (with safety drivers). For fully self-driving cars, what we can tell you is that this will take a long time as we incrementally get riders used to them. So when you think about how AVs are going to be deployed on the Lyft network, it will be based first on which rides an AV can or can’t accomplish. At first, AVs will only be able to perform a small percentage of all rides. AVs won’t be able to drive in inclement weather, so we’re going to need drivers to service the large percentage of trips that AVs aren’t able to perform in the beginning.
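In code terms, that "which rides can an AV accomplish" filter looks something like the toy dispatch sketch below. The conditions (service area, weather, passenger count) and names are illustrative assumptions, not Lyft’s actual matching logic.

```python
# Toy sketch: send a ride to an AV only when the trip fits what the AV
# can handle today; otherwise match a human driver.
from dataclasses import dataclass


@dataclass
class RideRequest:
    route_in_service_area: bool   # is the whole route inside the AV's mapped area?
    weather: str                  # e.g. "clear", "rain", "snow"
    passengers: int


def can_av_serve(ride: RideRequest) -> bool:
    """Rule-of-thumb checks for whether an AV can complete this trip."""
    return (ride.route_in_service_area
            and ride.weather == "clear"
            and ride.passengers <= 4)


def dispatch(ride: RideRequest) -> str:
    return "autonomous vehicle" if can_av_serve(ride) else "human driver"


if __name__ == "__main__":
    print(dispatch(RideRequest(True, "clear", 2)))   # -> autonomous vehicle
    print(dispatch(RideRequest(True, "snow", 2)))    # -> human driver
```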

Lyft has an important role to play in getting people to try self-driving cars. We believe the key to transitioning self-driving cars from the development stage to a public offering is a rideshare network. By bringing AVs to market through rideshare, complemented by human drivers, we ensure riders have reliable transportation options as they begin to try self-driving rides. Introducing self-driving vehicles into the mainstream via a rideshare network is a seamless way to encourage broad adoption. It will be some time before AVs are deployed commercially at scale, and it’s important to note that rideshare accounts for less than 1% of all Vehicle Miles Traveled (VMT) today, so we’ll still need drivers even as that number grows and even when AVs are in play.
