Nishant Bhajaria, Director of Privacy Engineering, Architecture and Analytics, Uber: Building a Platform That Users Can Trust
Nishant Bhajaria knows a thing or two about building trustable platforms. Bhajaria, Director of Privacy Engineering, Architecture and Analytics at Uber, leads the engineering team responsible for developing privacy tools such as the platform that enables Uber riders to view how many Uber drivers liked or disliked them. The team also builds other data protection tooling, which is a key driver of the company’s privacy strategy.
Building trustable platforms has never been more important, especially for public-facing companies. Social media has the power to strengthen or demolish a company’s brand reputation, customer loyalty, revenue growth and profitability. As a recent Harvard Business Review article points out, the security and trustworthiness of a digital platform, the quality of the digital user experience, the extent to which users trust a digital environment and the degree to which users actually utilize the digital tools made available to them each have an impact on user trust.
HMG Strategy recently caught up with Bhajaria — the author of ‘Data Privacy: A Runbook for Engineers,’ an educator and an animal rights activist committed to the welfare and preservation of elephants — to explore the factors that go into creating platforms that users trust, along with the critical role that data plays in developing trustable platforms.
HMG Strategy: How do you build a platform that users trust? What are the starting points?
Nishant Bhajaria: I think the ideal starting point for any platform is the very beginning. Be it infrastructure, be it data, be it the endpoints, be it any sort of digital opportunity, you want to start when the size of the risk is at its smallest. For example: how do you make sure that you get monitoring and alignment capabilities at the outset? What typically happens is that companies focus on trust when there is an audit or a major media story, when there is a brand reputation problem or when they need to get into a new market.
The challenge with fostering trust at that point is that it all gets done in a rush with an incomplete understanding of the facts. It’s like that old joke about why the man was looking for his keys under the streetlight: because that’s the only place he could see.
If we’re talking about building trusted platforms, you want to make sure that at the engineering level, at the planning level, and at the marketing, business and product-side level, you identify these opportunities and build trust into each of these components as early as possible, so that they inevitably coalesce and work together. Doing so provides you with an opportunity to keep improving things rather than having to bolt things on.
What are some additional elements for building trust that executives should be mindful of?
NB: Companies are under scrutiny like never before. Whether it’s enterprise customers or a B2C model, this high-trust goal should apply not just to the engineering side of the house but also to how the business makes decisions. For executives, delivering that consistent message is critical.
A lot of companies have grown based on silos, so it is critical that they connect these silos. That becomes necessary as companies grow anyway, so trust and safety can become the forcing function for organizational improvement. For executives, being a little more humble and having that trust focus happen across the company, rather than just depending on engineering to bake it in, is critical.
What are some recommendations for developing a platform that’s designed well for users? How can we best capture their interest and address their needs and preferences as we go about these efforts?
NB: The answer always lies in the data. Post Steve Jobs, when tech companies have built products, we have essentially made the customers adapt. Or if the customers didn’t like it, then we come back and change things. This agile model has served us well.
As a next step, we should use incremental data insights alongside the scrum-driven approach; that means releasing features and functionality more frequently in a more bite-size format and quickly collecting feedback. We have the capabilities around data analysis now that, as an industry, we never had even five years ago.
We have the capability to monitor customer behavior in a very anonymized fashion that tells us how people react to things. Having those insights and quickly making changes based on them on an ongoing basis across the board is critical. To have all that data and then not use it to benefit customers and build trust is a bit of a missed opportunity. When engineers, product managers and business leaders plan things, let’s bring the data folks into the room as well; because they have the actual raw data and the inferences from it, they know what the customer likes.
Earlier in the conversation, you talked about trust. How do we go about designing platforms that are trustable? What should we be thinking about, both as development teams and as the leaders behind these efforts?
NB: I would say giving customers visibility and then control, in that order, is critical. When you create a tool that lets customers know what you collect from them, how you use that data and how the tooling works behind the scenes, and you then enable them to participate in that process, that is the way to go. Now, the danger — and I’m sure that’s one of the biggest learnings people are going to take from the Facebook earnings reports from last week, when their stock fell by 25% — is that when people are provided with choices, they will inevitably opt out. That’s the immediate reaction people have.
That may be true in the beginning, so things may get worse before they get better. But what happens is right now, if people are participating in this tech industry product outlay, they are doing so without the right context. The quality of engagement may not be great. You may have a platform that hosts news that is not accurate. So, yes, there are high levels of engagement right now, but it is uninformed and therefore unproductive engagement.
It is a bit of a risk to take, but a good one in my opinion, to give people visibility into what happens behind the scenes from a data collection and processing perspective. After the initial loss of customers, you will probably get a second batch of customers that will be a) more informed, b) more engaged and c) more loyal from a platform perspective. That engagement will lead to better advertising, better growth and better personalization. My advice would be to take a small hit now so as to prevent a much bigger hit later.
How should we be thinking about the use and application of data in designing a trustable platform?
NB: Let me go back a few decades. It’s not often that an engineer uses Ralph Nader as an example, but I’m going to go ahead and do that right now. There was a time when automobile companies were not huge fans of adding seat belts, air bags or other safety devices. It was supposed to be a cost sink. It was supposed to slow down manufacturing. But look at ads by carmakers today. They talk about those safety features as differentiators. I was T-boned in a car accident three years ago and, if not for those safety features, I would probably not be here having this conversation with you.
Just as automobile safety has become a selling feature, you can also build trust into the company’s core product. As an example: we do XYZ to protect your data so it doesn’t get breached like Equifax was. This is how we protect access management to data so that the Colonial Pipeline incident is not repeated.
If you can associate your trust investments with bad stories that people already know about, I think that’s a win to be had. Law & Order was a TV drama in the ‘90s and the early 2000s whose plots were ripped straight from the headlines. I think what we need to do is jazz up this trust conversation a bit to say, ‘This stuff you read about, it happens because people weren’t focused on trust. Here’s what we are doing to optimize for trust.’ That message — especially if it’s directly relevant to the news of the day — will capture customers’ attention in a way it doesn’t today.
Right now, people often think about trust as something that is rebuilt after the fact, after something is breached, after some harm is created. Why not preempt that conversation and say, ‘Hey, this happened to somebody else, we’re making this investment so that it doesn’t happen to us’ and create that relationship going forward.
How should we be thinking about privacy in protecting customer data to help build trust?
NB: Privacy should be thought of as a Trojan Horse. Somebody’s going to sign for it at the door and you can let them in the house. But unlike the historical Trojan Horse, what privacy lets you do is overcome some of the downsides of how innovation actually happens.
When I was an engineer back in the day, innovation was very, very top down. My manager’s OKRs became my OKRs — pretty simple. But this also led to the slow, languid waterfall model where things didn’t work as intended quite often, and it was too late to fix it because it was six months’ worth of work.
So, we pivoted in the opposite direction to give engineers all the power in the world. Let’s give them their tech stack, a DevOps pipeline, and that’ll lead to amazing products, which it has, and that will lead to failing first and fast and recovering quickly, which companies have optimized for. But what that also means is that mishaps happen because data is not properly managed, device management isn’t properly tracked. Those sorts of downsides have just now come to bear after all the fruits of that innovative, bottom-up approach have been visible for years now.
What privacy forces you to do is look at data more holistically. Individual silos of data may be okay, but collectively, they can cause privacy harm. Most engineers will not know that because they’re focused on their product, their silo, their OKRs. Privacy by its very definition forces you to look cross-functionally and more comprehensively.
If you invest in privacy automation from a trust perspective, you are a) helping the company stay compliant, b) avoiding expensive fines, and c) improving data and infrastructure quality in a way that makes trust possible, because now you actually have a fuller understanding of your risk, which pre-privacy you didn’t.
I have found that all of these improvements improve data quality, which leads to better products as well.
I would think of privacy not as a cost nor as a blocker. I would think of it as an opportunity to elevate the entire organization in terms of maturity and productivity.
Based on what you’ve shared with us regarding the development of trustable platforms, what are some recommended starting points?
NB: When it comes to trustable platforms, executives should look at the negative first and the positive second. The reason is, when it comes to trust, I still see people thinking, ‘Oh, it could only happen to those guys over there. We are the good ones; we don’t do anything wrong.’
In this worldview, it happened to Equifax because the management team didn’t move fast enough. But then the same thing happened to Colonial Pipeline. We had the Office of Personnel Management breached in 2015. Too many people go from being languid to panicky — nobody cares and then everybody cares. I would like for people to operate somewhere in the middle, where they don’t optimize for bureaucracy and slow the company down in the name of trust. I don’t want “trust theater,” just like we often have security theater at the airport, where we take off our shoes and surrender our water bottles and we don’t always understand why.
I would avoid that excessive bureaucracy, but I would also make sure that there is continuous vigilance by way of monitoring dashboards and detecting anomalies, so that before something becomes really bad, you catch it and fix it rather than waiting too long.
Once the system has been designed using the criteria that you shared with us, what can be done going forward to ensure that the platform remains trustable and is meeting the needs of the target audience?
NB: My advice would be ‘In data we trust.’ The reason I say this is because the only way you can maintain that trust portion is to look at metrics that are supported by data. How many people are accessing data at all? How many people are accessing this data every single day? If this data has not been accessed for three weeks, why does the engineer still have access? If this data has not been accessed in a couple of months, why are we still storing that data?
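The access and retention questions above can be turned into simple automated checks. The following is a minimal sketch, not Uber tooling; the audit-log structure, thresholds and dataset names are illustrative assumptions based on the intervals Bhajaria mentions (access unused for roughly three weeks, data untouched for a couple of months):

```python
from datetime import date, timedelta

# Hypothetical audit sketch: given when each engineer last touched a
# dataset, and when each dataset was last read by anyone, flag access
# grants and stored data that may no longer be justified.
ACCESS_STALE_AFTER = timedelta(weeks=3)  # "three weeks" from the interview
DATA_STALE_AFTER = timedelta(days=60)    # "a couple of months"

def stale_grants(grants, today):
    """grants: dict of (engineer, dataset) -> last access date.
    Returns grants unused long enough to question the access."""
    return sorted(key for key, last in grants.items()
                  if today - last > ACCESS_STALE_AFTER)

def stale_datasets(reads, today):
    """reads: dict of dataset -> most recent read date by anyone.
    Returns datasets that may be candidates for deletion."""
    return sorted(ds for ds, last in reads.items()
                  if today - last > DATA_STALE_AFTER)

today = date(2022, 3, 1)
grants = {
    ("alice", "rider_feedback"): date(2022, 2, 25),  # recent: keep
    ("bob", "rider_feedback"): date(2022, 1, 15),    # stale: revoke?
}
reads = {
    "rider_feedback": date(2022, 2, 25),  # still in active use
    "legacy_exports": date(2021, 11, 1),  # candidate for deletion
}
print(stale_grants(grants, today))   # [('bob', 'rider_feedback')]
print(stale_datasets(reads, today))  # ['legacy_exports']
```

In practice these thresholds would be tuned per data classification, and the flags would feed the monitoring dashboards and anomaly detection mentioned earlier rather than triggering automatic revocation.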
You want to take action based on your understanding of the risk and trust metrics. If a metric were to go from no to yes, or yes to no, would that change your crisis level from yellow to red, or from yellow to green? Looking for those inflection points is critical, because while they may seem a little random and scattered all over the place, over time they will reveal trends that you otherwise wouldn’t get.
The reason companies often miss these trends is because their data is siloed. Having a more comprehensive privacy-friendly view gives you a cross-functional understanding of how risk is accruing.
Any investment you make in trust at the front-end will pay you back multiple times over.
Any additional thoughts you’d like to share?
NB: As a final point, trust needs to be everybody’s job. You cannot make it the CEO’s job if the engineers won’t follow through. You can’t impose it upon the engineers without executive sponsorship. So, making sure that trust is a responsibility shared across the company is critical; otherwise it won’t take off. The challenge often is that people say all these things at the first-principles level, but it never gets actioned.
What I have learned in my career is that bad data is worse than no data. Making that case intelligently is critical. You want to make sure that there is not just one person in the company who speaks with a trust mission; in fact, the buy-in should happen across the board because that’ll help you improve the employee experience. It will help you build better relationships with customers, but most importantly, it will give you something positive to talk about as a company brand. It will make it easier to hire more people, get into new markets, etc. The benefits are in front of us, people need to see this opportunity for what it really is.
- Building a trustable platform begins with getting the fundamentals right, including infrastructure, data, monitoring, etc. and building trust into each of these components
- Trust in data – including the metrics that are supported by data, such as customer feedback – to ensure that a platform remains trustable and is meeting the needs of the target audience
- Think of privacy not as a cost or a hindrance but as an opportunity to elevate the entire organization — and treat that mindset as a core starting point.