App Performance

Café with Senthil Padmanabhan from eBay, on the importance of speed consistency

Transcript of the App Performance Café episode with Senthil Padmanabhan from eBay, originally published on March 17, 2022. More details on the podcast episode page.

Rui Costa:

Hi, I'm Rui, and welcome to the App Performance Café, a podcast focused on mobile app performance. Today I have the pleasure of having as a guest Senthil Padmanabhan. Senthil is a Vice President and Technical Fellow at eBay, where he heads user experience and developer productivity engineering across eBay's marketplace.

The reason why I invited Senthil is that he's the author of a great article called "Speed by a thousand cuts", where he shares eBay's experience with an initiative called Speed, whose goal was to significantly improve the performance of both eBay's mobile web and native applications.

So I will place a link to the article in the description of the episode so that you can take a deeper look at it. Today, Senthil and I will talk about the work they are doing at eBay with respect to performance from a high-level perspective, starting with the key metrics or KPIs that Senthil considers most important to focus on when thinking about performance. Then Senthil will guide us through a journey that involves tons of very small optimizations that, when put together, yield very significant results in terms of performance and overall user experience. Hope you enjoy it. Don't forget to follow us on the usual podcast platforms, and please visit performancecafe.codavel.com.


Episode Start

Rui Costa:

Hi, Senthil. Thank you so much for joining the App Performance Café. It's a pleasure to have you on the show.

 

Senthil Padmanabhan:

Thank you, the pleasure is mine. Looking forward to it.

 

Rui Costa:

Me too. The reason why I invited Senthil is this spectacular article that he wrote, which I share a lot with my engineering team; every new person that comes in has to read that same article. The article is called "Speed by a thousand cuts". I won't spoil the end of the story, but I love the article because it's about a massive initiative to improve performance in general. Usually I start with the question of why we should care about mobile app performance, but adapting it to your case, the question would be: you built an initiative called Speed, if I'm not mistaken, right? Why did you feel the need to do that? Why was it necessary, and what were the expected outcomes of building such an initiative?


Senthil Padmanabhan:

Good question. When you think about speed, I believe it has become a very fundamental expectation from customers, especially new customers or customers of the modern age, that your experience has to be fast. And it is not just that it should be fast, it has to be consistently fast. I think that is the key point we need to highlight. Just having one interaction fast and the rest of your interactions not fast is not acceptable these days. The other factor is predictability, right? People expect your experience to be predictable in every flow and every interaction you have. We already knew the studies, and eBay being an e-commerce company, we know that speed is conversion: when a page loads faster, it converts better. That's an old thing; everybody in the e-commerce industry knows it, and we had been doing a good job on it for a long time. But around 2018 or so, we eased off focusing specifically on speed and did a lot of product initiatives, and whenever you put a lot of product on the roadmap, speed takes a back seat. Then we started seeing it in the numbers. When we did a competitive study with other industry peers, we saw that we were leading earlier in the year but had become a little lagging by the end of the year. That was not acceptable, not just in terms of conversion, but in terms of providing a delightful experience. So we wanted to do the Speed initiative; it was very clear that we were lagging and we had to up the game again. But it was not just about competing for the first or second spot, right? It is about making ourselves better and putting a system in place to stay rigorous after we meet the numbers. And again, we wanted consistent performance: every page, every view in a user session's flow should be fast, every interaction should be fast. That consistency is even more important than making one instance fast.

 

Rui Costa:

Yeah. And you touched on one thing that I usually feel is somewhat hidden, which is that consistency part. Even across different user sessions: if I have ten very good experiences but one is very bad, that's sufficient for me to say this sucks, move to another application, or just not come back in the next few days, because I had one very bad experience. So this consistency that you mentioned is what we should all aim for when we have an app or a mobile web application. But I would say it's the hardest part to achieve, because you don't control every piece of the puzzle. Taking my case, for example, we don't control the wireless networks, and there are many other variables like device diversity, location diversity, user diversity. There are a lot of things we don't control, and we have to make sure that everything is very smooth in every case, so that we don't provide that single bad experience that is sufficient to kill the user's expectations of the application. So you touched from the very beginning on a point that is very dear to me, because I always fight with our engineers: they look at averages, and I keep saying, hey man, you have to look at the tail, because that's where the devil hides, right? It's that one experience that will kill everything. But anyway...

 

Senthil Padmanabhan:

Good point. Also, it's a psychological fact that humans give more weight to negative things than to positive ones.

 

Rui Costa:

Absolutely.

 

Senthil Padmanabhan:

And that is just the evolutionary fight-or-flight response. So that's what we tried to tell folks: focus on the tail, as you're saying, and that will make sure you're providing a good experience, because that one bad experience is what is going to linger in their thoughts when they are done with that transaction.

 

Rui Costa:

Absolutely. The thing I love about your article, and I think it's right at the beginning, is that you show that you went through this initiative to improve performance, you achieved the results you were aiming for, but there was not a single silver bullet. It was a lot of small things that you guys had to do to be able to achieve that. Were you expecting that from the very beginning?

 

Senthil Padmanabhan:

Yes, I was expecting that, because eBay was already an optimized experience. The website was already fast, right? When you start a new company or when you are doing something new, there are so many low-hanging fruits that you can go and solve to make your platform better. But eBay has been in the industry for quite some time, so we had already covered a lot of the obvious things that give you a big performance boost. So we were very clear that we were not going to find any basic optimizations at this point. Some of the obvious questions: did you put it on a CDN, are you caching your assets properly, are you compressing them properly? All of these questions were answered a long, long time back. So I knew it was not going to be a case of "do this one optimization and suddenly you get a big win". It's very difficult for a big company to move the needle when you have already been doing this for quite some time. So I set this context very clearly with the team: you're not going to get that, don't expect it to happen. Instead, let's focus on some of the things that have been neglected for quite some time. They will be basic, things we usually neglect, and let's keep at it for a period of time. Continuous improvement is the key here. It's like the saying: even if you improve your system by just 1% every day, it compounds over the year, but you don't see it on an everyday basis because each day is just a 1% improvement. So I was telling the team about that, and the message was: don't worry about every launch, don't keep sweating over whether you reached the numbers or got the conversion on each change; over a period of time you'll see the improvement. That context was set very clearly.

 

Rui Costa:

Yeah. So I have one question for you, which we'll jump to next, related precisely to that part of how you motivate the team to go for, let's say, 1% each time, because, at least from my experience, that's not always easy. But before jumping into that, I think one of the things that is very interesting in your article is that you don't try to optimize tons of things. You focus on a few metrics. Can you tell us a little bit about those metrics and why you chose them?

 

Senthil Padmanabhan:

So when we did this initiative in 2017, the primary metrics that we took were the loading indicators. The most important one was about the full content: when a user goes to search, or clicks on an item or a product, how long from the click until they see the full content. That was the main metric we tried to optimize. We call it time to above the fold in the web world, and we call it VVC, virtual visual complete, in the native world on iOS and Android. To give you an example: on the search page, when you are in a grid view, let's say on desktop, the beacon is fired when the sixth item image gets loaded. Similarly, for the item or product page, the beacon is fired when the main gallery image is loaded.

 

So this gives us a good indicator that customers can see things are loaded and ready to interact with. Having said that, if I were doing that initiative today, I would also focus on interactivity, not just loading. Interactivity was only just emerging as a metric at that time. Loading is just one aspect of the user journey; users don't just want to see the screen, they want to interact with it. They want to click add to cart, buy it, scroll, go to the next image, and so on. So I would also start measuring interactivity if I were doing it today. We track it now, but that was not the case back when we did the initiative in 2019 and 2020.
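To make that above-the-fold beacon concrete, here is a minimal sketch of how it could be wired up on a web search page. The `.result-img` selector, the count of six images, and the `/beacon` endpoint are assumptions for illustration; this is the general technique Senthil describes, not eBay's actual instrumentation.

```typescript
// Fire an "above the fold" timing beacon once the Nth result image has loaded.
// Selectors, count, and endpoint are hypothetical.
const ABOVE_THE_FOLD_COUNT = 6; // e.g. the sixth grid image, as in the example above

function trackAboveTheFold(): void {
  const images = Array.from(
    document.querySelectorAll<HTMLImageElement>('.result-img')
  ).slice(0, ABOVE_THE_FOLD_COUNT);
  if (images.length === 0) return;

  let loaded = 0;
  const onImageDone = () => {
    loaded += 1;
    if (loaded === images.length) {
      // Milliseconds elapsed since navigation start.
      const aboveTheFoldMs = performance.now();
      navigator.sendBeacon(
        '/beacon',
        JSON.stringify({ metric: 'above_the_fold', value: aboveTheFoldMs })
      );
    }
  };

  for (const img of images) {
    if (img.complete) {
      onImageDone();
    } else {
      img.addEventListener('load', onImageDone, { once: true });
      img.addEventListener('error', onImageDone, { once: true });
    }
  }
}

trackAboveTheFold();
```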

 

Rui Costa:

Yeah, that's very interesting. Last season we had guests from Farfetch, the luxury fashion e-commerce platform, and they actually published a very clear correlation between time to interactivity and revenue. It's crystal clear in their data, and I guess it applies to any e-commerce use case. Because, as you mentioned, it's not only about showing the content, it's about giving the sense that you can do something with it. Otherwise it's useless to put all the images there, right? If you cannot act, it doesn't feel instantaneous, which in the end is what the end user is expecting.

 

Senthil Padmanabhan:

We started going in that direction in 2019. When we started the initiative, people in the industry were just beginning to talk about interactivity, but it was only in 2020 and the last two years that it became a primary metric for a lot of folks.

 

Rui Costa:

I think that the article from Manuel was from the end of 2020 or early 2021, so it's very recent. Yeah, it's something that is popping up very clearly in the industry. So, going back to the 1% improvement every day and to motivating the team to chase that goal and keep on track: you use something that I think you call speed budgets, right? Or performance budgets? I'm not sure, is it speed?

 

Senthil Padmanabhan:

Speed budget. The industry calls it a performance budget; we call it a speed budget.

 

Rui Costa:

Speed budgets, ok. So can you tell us a little bit about how you set them, and what impact they have on the team, on keeping the team motivated, and on the end outcome of the efforts you are building?

 

Senthil Padmanabhan:

The speed budget is just one aspect of what you're describing. First, when we started the effort, we wanted clear alignment, both top-down and bottom-up. I say this for every major initiative, right? The top-down part is to have the leadership convinced that this is a very important initiative, so that you get the right funding and prioritization. But you should also get alignment bottom-up with all the engineers working on it, because only then will you have an impactful initiative. Otherwise they just work on a task, finish it, and forget about it, and as the systems keep evolving the gains erode. You want every engineer working on this initiative to have the bigger picture, to know that they are making an impact for the customer, rather than just completing the one task or user story they are working on. So this is all about storytelling.

At the end of the day, you have to build a very convincing story for the engineers about why what they are working on matters and how it can have a big influence on our sellers and buyers. Once you do that, when you ask these engineers what they're working on, they're not going to say, "I'm working on a task to enable compression on this textual content." They will say they are making their customers' lives better; they look at the journey and at what happens at the end when the small task they are working on adds up. I think that is the biggest motivation: they still work on a very specific task, because that's how you get things done, but the impact is what they should look at. We made sure that every engineer who worked on it could clearly say, "Hey, I'm making my customers' lives better," rather than, "Hey, I'm doing this JSON optimization."

That's one thing. The speed budget was basically the direction of where we had to go, because speed is a very subjective topic. A lot of folks don't know how fast is fast. In fact, there is a brilliant paper published about 20 years ago, in 2003, with exactly that title, "How fast is fast enough?", by a person called Peter Sevcik, I believe. The internet in 2003 was not as fast as it is today, so one argument in the paper was that for some complex interactions ten seconds is acceptable. Another person in the same debate, I think his name was Peter Christy, was saying that everything should be less than one second. Then they try to come to a conclusion, saying that if there are many elements on a page, it can afford to be a bit slower because people expect it to be slow. The bottom line is that there is no good answer. People were arguing and debating, and that's the beauty of the paper, but what I got from it is that there is no clear answer to how fast is fast.

 

Rui Costa:

Absolutely. Not from the e-commerce use case, but from video streaming, I know a related story. Many years ago there was this Akamai report, which I think is still the industry standard, saying that when it comes to video, and in particular to video start-up time, loading the video within two seconds is okay; after two seconds is when people start dropping out of the video. That's still considered the standard. However, recently, I think last year, I saw a talk from an engineer at Snapchat where they analysed video start-up time against user engagement, or in their case user churn: how many people actually drop out or move on from the snap. They concluded that yes, two seconds is a threshold, but in their case, if you don't load the video within two seconds, basically every user drops out. It's exactly the other extreme. And this is funny because it shows that the demand for the instantaneous feeling, let's call it that, is increasing. If you look at Snapchat and its user base, which is typically younger, you see that it's even more demanding. So you're right, no one knows the answer. I think you always have to test it out and see when it's good enough, when you start getting diminishing returns because you're optimizing more and more and getting nothing back. But actually knowing, as you said, how fast is fast enough is still very, very hard today, no matter the use case. You just have to test it out.

 

Senthil Padmanabhan:

Correct, and I think that's what we tried to do here. When it comes to the speed budget, we based it on a few things. One was historical context: we actually looked back at eBay's journey, at how we had been doing on performance on key views and pages like search, item, and the homepage, and we got an approximation of where we had consistently been at our best. Then we also did a competitive study, because that is very important. Customers have so many options these days, right? If your site or app is down, they can immediately go to another platform. So competitive analysis matters. We took some of our industry peers and used the public Chrome User Experience Report, which is published every month and aggregated at the domain level, so we could check how some of our industry peers were doing. That gives us a good bound, an estimate of where we should be. And then we tried to get a number based on eBay's infrastructure. We need to consider that factor too, right? Some companies have truly global infrastructure, some are very big, some are small, so that also has to go into the transfer function when coming up with the budget. We took all three of these into account and came up with a budget that we felt was pragmatic, reasonable, and something that would have a meaningful impact on our customers in the long run.
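Senthil refers to the monthly published Chrome User Experience Report dataset; as a rough illustration of how that kind of competitive comparison could be pulled, here is a sketch using the public CrUX API, which exposes similar origin-level aggregates. The peer origins, the environment variable, and the focus on p75 LCP are assumptions, not eBay's actual process.

```typescript
// Sketch: compare p75 Largest Contentful Paint across a set of origins using
// the public Chrome UX Report API. Origins and API key handling are hypothetical.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';
const API_KEY = process.env.CRUX_API_KEY; // hypothetical environment variable

async function p75Lcp(origin: string): Promise<number | undefined> {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${API_KEY}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin,
      formFactor: 'PHONE',
      metrics: ['largest_contentful_paint'],
    }),
  });
  if (!res.ok) return undefined; // origin may not be in the dataset
  const data = await res.json();
  const p75 = data?.record?.metrics?.largest_contentful_paint?.percentiles?.p75;
  return p75 !== undefined ? Number(p75) : undefined;
}

// Usage: rank your own origin against peers to derive a pragmatic budget.
async function main() {
  const origins = ['https://www.example-peer-a.com', 'https://www.example-peer-b.com'];
  for (const origin of origins) {
    console.log(origin, 'p75 LCP (ms):', await p75Lcp(origin));
  }
}

main();
```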

 

Rui Costa:

Yeah. Super. That makes total sense. So now, going to the cuts, right? The article is "Speed by a thousand cuts", so let's go to the cuts. One thing that I have to confess surprised me as a source of positive performance impact was the effort you made to reduce the payload of textual resources, let's call it that. I was not expecting that. Why did you get the instinct to chase that one?

 

Senthil Padmanabhan:

I think this, again, is an often overlooked area. When you are developing features, especially in a company where so much product is getting rolled out, you keep adding stuff to your system. You rarely look at removing stuff, even when it's not being used by any customer, simply because of velocity: people have to deliver products, we have to keep churning and launching new things. So over a period of time you accumulate a lot. It's not a big deal when you've only been doing it for six months or a year, right? But if you've been launching products for three or four years, it actually adds up. You end up with a lot of unused content in the payloads that come from your servers, whether it's the static assets you serve or the JSON responses you generate dynamically; much of it is never used by your customers. Removing it saves in several ways, and again, it's tiny things that add up. First, it's bytes on the wire. That's pretty obvious, I think we can all agree on it, but multiply it by the number of users and the number of sessions they have. That is one impact. The second is that the device has to spend time processing it even though it has no value for your customers: the JSON has to be parsed, and there is parsing time associated with it, and again there is a multiplication factor because it happens on every request. All of these things add up in the long run. And you're just doing one of the most fundamental things: removing code and data that doesn't need to be sent to your users. When it stays on the server side, it's less urgent, because the impact is on your side; it still has to be cleaned up, but it doesn't directly affect your customers. But when you're sending something to the client, to your customers' devices, you have to be very mindful of what you're sending. That's a kind of empathy you show towards your user base, and that's how we tried to clean it up. It's generally good hygiene to do that periodically.
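As a small illustration of the payload-trimming idea, here is a sketch of stripping fields a given client never renders before the response goes over the wire. The field names, channels, and allow-lists are hypothetical; the point is the technique, not eBay's schema.

```typescript
// Sketch: trim a server response down to the fields a given client actually
// renders before sending it. Fields and allow-lists are made up for illustration.
interface ItemResponse {
  id: string;
  title: string;
  price: { value: number; currency: string };
  imageUrl: string;
  internalRanking?: number;   // never used by clients
  legacyBadges?: string[];    // only an old desktop UI used this
}

type Channel = 'ios' | 'android' | 'web';

const FIELDS_BY_CHANNEL: Record<Channel, (keyof ItemResponse)[]> = {
  ios: ['id', 'title', 'price', 'imageUrl'],
  android: ['id', 'title', 'price', 'imageUrl'],
  web: ['id', 'title', 'price', 'imageUrl', 'legacyBadges'],
};

function trimForChannel(item: ItemResponse, channel: Channel): Partial<ItemResponse> {
  const allowed = new Set<keyof ItemResponse>(FIELDS_BY_CHANNEL[channel]);
  return Object.fromEntries(
    Object.entries(item).filter(([key]) => allowed.has(key as keyof ItemResponse))
  ) as Partial<ItemResponse>;
}

// Fewer bytes on the wire and less JSON for the device to parse,
// multiplied by every request in every session.
const full: ItemResponse = {
  id: '123', title: 'Example', price: { value: 10, currency: 'USD' },
  imageUrl: 'https://example.com/img.webp', internalRanking: 42,
};
console.log(JSON.stringify(trimForChannel(full, 'ios')));
```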


Rui Costa:

I confess that when I first read the article, that was the part that struck me the most, because I was not expecting it. If I were thinking about my first option, I don't think I would go there. And you're absolutely right, because stuff just keeps adding up. I'd call it junk, though it depends on what specifically it is, but it keeps accumulating, as you mentioned. Then you have the parsing component, which adds even more work that is not useful to the end user at all; it's just there for your service to function. So it makes absolute sense. Another point that I found very interesting, because we mainly do it on the client side, is what you call critical path optimization for services. Correct me if I'm wrong, but my interpretation is that you're basically telling the server to prioritize what's above the fold and lazy load everything else. I know it's a very high-level simplification, but that's what I took from it. Is that accurate?


Senthil Padmanabhan:

Yeah, that is accurate. We usually have this concept of above the fold on the devices, because the device knows the viewport, so it knows what comes above the fold and prioritizes that. Browsers do that, and on native apps we can try to do it too: the top gets more priority. But servers usually don't know whether a module sits in the top part of the view or further down. Fortunately, at eBay we have an architecture called experience services, which are view-based services rather than the typical entity-based services. Such a service clearly knows that a request came from an iOS device, an Android device, or a desktop web browser, it knows which modules make up this particular view, and it knows which modules come first, because it sorts the modules in order before giving them back to the client. So it knows that these three modules are the ones that have to be displayed immediately on the screen, it prioritizes them, and it sends them first in a streaming response. It's more than lazy loading, which still happens in certain cases; basically, instead of sending one chunk of response back to the client, we stream the response back in multiple chunks, and each chunk is valid JSON. Each time a chunk arrives, the client runs some custom parsing logic to process that JSON, gets the data, and then the next chunk comes. So you don't have to wait for the responses from all the other services in order to populate the above-the-fold content, which is how a traditional service works: a traditional service has to give you the full aggregated response, so you need to wait for all the services involved, and your time is basically the max time of those dependent services, even if you call them in parallel. In this case, we restrict the first chunk to the services that feed the above-the-fold content. We started introducing that concept, and it really paid off well for us.
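To make the streaming idea concrete, here is a minimal sketch of a server that sends the above-the-fold modules as a first self-contained JSON chunk and the rest later, using newline-delimited JSON over plain Node HTTP. The module names, latencies, and NDJSON framing are assumptions for illustration; eBay's experience services are not implied to work exactly this way.

```typescript
// Sketch: stream above-the-fold modules first, each chunk a complete JSON
// document on its own line, so the client can render as chunks arrive.
import * as http from 'http';

type Module = { name: string; data: unknown };

// Pretend these calls hit downstream services with different latencies.
const fetchAboveTheFold = (): Promise<Module[]> =>
  new Promise((resolve) =>
    setTimeout(() => resolve([
      { name: 'searchResults', data: { items: ['a', 'b', 'c'] } },
      { name: 'filters', data: { categories: 3 } },
    ]), 50)
  );

const fetchBelowTheFold = (): Promise<Module[]> =>
  new Promise((resolve) =>
    setTimeout(() => resolve([
      { name: 'recommendations', data: { items: ['x', 'y'] } },
    ]), 400)
  );

http.createServer(async (_req, res) => {
  // No Content-Length, so Node uses chunked transfer encoding automatically.
  res.writeHead(200, { 'Content-Type': 'application/x-ndjson' });

  // First chunk: only the modules needed to paint above the fold.
  res.write(JSON.stringify({ modules: await fetchAboveTheFold() }) + '\n');

  // Second chunk: everything else, without blocking the first paint.
  res.write(JSON.stringify({ modules: await fetchBelowTheFold() }) + '\n');
  res.end();
}).listen(3000);
```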

 

Rui Costa:

Yeah, it makes a lot of sense. We are building something similar on the client side that follows the same concept. The difference is that on the client side we are basically adapting the experience for that specific end user, but in our case it's more about things like video quality, image quality, and which requests should be prioritized given your network connection. But I think the end goal is exactly the same. It's super interesting to see that on the server side, because honestly I never thought about doing it there; I always thought about doing it on the client side. And it's super interesting to do it on both sides, actually, but on the server side, definitely. So, one of the things I was curious about, because I'm not sure I get it: why was it so important for you guys to go for the WebP image format in your image optimization efforts?


Senthil Padmanabhan:

Again, in 2019, when we did this initiative, WebP was one of the most optimized image formats available; newer formats have since come along, so it's a moving target. At that point it was WebP, and we were only using WebP here and there in our native experience, on and off. And that was another thing people have to understand: when you standardize on a particular image format across all your devices, you also leverage CDN caching. Let's say somebody searches for something on an iPhone, on an iOS device, or on an Android device; that same image is what's going to show up for the desktop user too. When you standardize the format for all these images, the caching is shared, so one user ends up helping every other user. It's a static asset: a user searches for query A on one device, then another user searches for query A on desktop, they get the same results back, and those images are already in the cache by the time the second user comes along. This is something people usually miss. When you have large traffic and multiple channels, you want each channel to help the others in terms of static assets. And the only static asset shared between all these channels is images; it's not JavaScript, it's basically images, video, or other media content. So you standardize them and then you get the benefit not only on...


Rui Costa:

So basically what you're saying is that if you pick the same format for every platform, you're increasing the cache hit ratio, because everyone is asking for the same images.

 

Senthil Padmanabhan:

Correct, exactly. That's something people usually miss. So next time, when we try to move to newer formats, and we are already looking into it, we know there is going to be a performance degradation initially, because we ramp up slowly. There will be a period when one device is ramping up and another is not, so there will be a performance dip. But we know that in the long run it's better, because once all the devices are synced up, the users get the full benefit.
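The cache-sharing point can be seen in a tiny sketch: if every channel builds the same canonical image URL with the same format and size buckets, a request from any one channel warms the CDN entry that all the others will hit. The host, path pattern, and size buckets below are hypothetical.

```typescript
// Sketch: one canonical image URL shared by every channel, so a request from
// iOS, Android, or desktop warms the same CDN cache entry. Host and buckets
// are made up for illustration.
type Channel = 'ios' | 'android' | 'web';

function imageUrl(imageId: string, width: 400 | 800 | 1600): string {
  // Same format (webp) and same size buckets for every channel, so channels
  // never fragment the cache with channel-specific variants.
  return `https://img.example-cdn.com/items/${imageId}/w${width}.webp`;
}

// All three channels request the identical URL; the second requester is
// served straight from the CDN edge.
const channels: Channel[] = ['ios', 'android', 'web'];
for (const c of channels) {
  console.log(c, imageUrl('abc123', 800));
}
```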


Rui Costa:

Yeah, there is that period of transition, but of course, it makes absolute sense. So I have a couple of other topics I was very curious to learn more about. First, the predictive prefetching of data. How do you do that? What kind of rules do you have for knowing which data to prefetch, to ensure it's already there when the user tries to act?

 

Senthil Padmanabhan:

Predictive prefetching can happen in two scenarios: one static and one dynamic. The static one is what we tried to do on the web flow, where the user is on a journey. Again, we need to be very clear about this notion: users don't come to a platform just for one view and then go away, especially in e-commerce. It's a journey. They usually come to the homepage, they search for something, they visit the item, and then probably they leave, or they do the whole checkout and then they leave. So a user session is a journey, and the idea is: why don't the pages help each other along that journey? When a user comes to the homepage, there is a high probability, and analytics already gives us this data, that a certain percentage of users who come to the homepage will land in search.


Rui Costa:

Okay. 

 

Senthil Padmanabhan:

So we already have the data saying, okay, most users go from the homepage to search, and most of the time from there to the item page. And from the homepage, daily deals is another page users go to. So we know the top destination points from the homepage. Since we know the next page is likely going to be search, we try to prefetch its static assets, like JavaScript and CSS. This did not have as big an impact as we expected, because static assets are cached in your browser anyway, so you're really only helping first-page sessions.


Rui Costa:

Okay. 

 

Senthil Padmanabhan:

So it only helps a new user or a first session; I would do some testing around this, because it helped us a little bit, but not in the long run. I'm just being transparent here about the impact of this change, especially because subsequent requests are cached anyway. The other thing that really helped us, on the dynamic side, was on the search page: we started caching the top five items, because we know that most users usually click one of the first five or so items. So we cache them on the server side and keep them ready for when the user clicks, and that actually had a bigger impact for us than the static assets I was highlighting. So it's basically heuristics; you need to apply some heuristics. These are the trickier optimizations, because when you do dynamic caching, you need to make sure your metrics don't get skewed: if the user never clicks on a prefetched item, that's basically a non-event, nobody saw that page, so you have to be very careful about how you record metrics. It requires a lot of effort when you're doing dynamic caching.
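On the static side this is typically a prefetch hint for the likely next page's JavaScript and CSS; the dynamic side could look roughly like the sketch below, which warms a server-side cache for the top five results of a query. The in-memory cache, the `fetchItemDetails` helper, and the top-5 cutoff are assumptions for illustration, not eBay's implementation.

```typescript
// Sketch: after serving a search results page, warm a server-side cache for
// the top N items, since most clicks land on the first few results.
const itemCache = new Map<string, unknown>();

// Hypothetical downstream call, stubbed so the sketch is self-contained.
async function fetchItemDetails(itemId: string): Promise<unknown> {
  return { id: itemId, title: `Item ${itemId}` };
}

async function warmTopItems(resultItemIds: string[], topN = 5): Promise<void> {
  await Promise.all(
    resultItemIds.slice(0, topN).map(async (id) => {
      if (!itemCache.has(id)) {
        itemCache.set(id, await fetchItemDetails(id));
      }
    })
  );
}

async function getItem(itemId: string): Promise<unknown> {
  // Served instantly if the click was predicted; falls back to a live fetch.
  return itemCache.get(itemId) ?? fetchItemDetails(itemId);
}

// Usage: warm the cache right after returning the search response.
// Note: a prefetched item the user never clicks should not count as a page view,
// or the metrics get skewed, as discussed above.
warmTopItems(['111', '222', '333', '444', '555', '666'])
  .then(() => getItem('222'))
  .then(console.log);
```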

 

Rui Costa:

Yeah, I can only imagine. Definitely. So finally, tell me about speed champions. I love the expression you guys use.


Senthil Padmanabhan:

This is, again, the notion that we want everybody to be empowered and to own speed, rather than one particular team or an initiative owner having responsibility for it. This makes it very organic for everybody to treat speed as a foundational element of the product life cycle. So we identified people across the company, across these domains, who are genuinely invested in speed, and we told them: you are responsible for this budget, so you decide what is needed to make sure your products don't regress, and you are involved whenever there is a major effort in a product delivery. Teams come to them and say, "Hey, I'm going to do this big product feature, so I need to watch out for speed," and they can adjust accordingly. These folks are the ones responsible and empowered to even stop a release from happening.

 

Rui Costa:

Performance gatekeepers, right?

 

Senthil Padmanabhan:

Performance gatekeepers, but mostly people who are responsible for the budget for that particular domain. These engineers talk with their peers to make sure speed is being considered. Having said that, we have a system that already detects regressions; we don't rely on humans to catch those.

 

Rui Costa:

You do it in CI/CD?


Senthil Padmanabhan:

We have a CI/CD system: whenever code goes to production, it has gone through synthetic performance testing, and when it doesn't meet the budget, it is blocked. So it's the tooling that takes care of regressions. Either it happens in the pipeline, or somebody submits a request to this performance tool and it tells them whether they are good to go or not, and then folks make a call on that.
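A budget gate like the one described could be as simple as a script that compares synthetic test results against the speed budget and fails the pipeline when a metric goes over. The budget values, metric names, and results file below are hypothetical; no specific eBay tooling is implied.

```typescript
// Sketch: a CI gate that fails the build when synthetic test results exceed
// the speed budget. Numbers and file format are illustrative only.
import { readFileSync } from 'fs';

// Speed budget per metric, in milliseconds.
const BUDGET_MS: Record<string, number> = {
  above_the_fold: 1500,
  time_to_interactive: 3000,
};

// Expected shape: { "above_the_fold": 1420, "time_to_interactive": 3120 }
const results: Record<string, number> = JSON.parse(
  readFileSync('synthetic-results.json', 'utf8')
);

let failed = false;
for (const [metric, budget] of Object.entries(BUDGET_MS)) {
  const measured = results[metric];
  if (measured === undefined) continue;
  const status = measured <= budget ? 'OK' : 'OVER BUDGET';
  console.log(`${metric}: ${measured}ms (budget ${budget}ms) ${status}`);
  if (measured > budget) failed = true;
}

// A non-zero exit code blocks the deploy in the pipeline.
process.exit(failed ? 1 : 0);
```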


Rui Costa:

Interesting, that's very interesting. Senthil, I feel that we could keep talking for hours, but let me jump to the final, challenging question for you, which I always like: let's say you meet someone in the elevator, you know that this person is building a mobile application, and you have 30 seconds to give them your very best advice. What would you say?


Senthil Padmanabhan:

The first advice I'd give is not to look at performance as a unidimensional thing. It is basically about consistency: it is more than the one impression you give, it is about predictability and consistency, and you need that throughout your mobile experience. And create a culture around this concept of the critical path that we talked about, right? What users are going to see when they interact is more important than all the other things that happen in the app. So you need to come up with an architecture that focuses first on what users are going to see and interact with, and then does all the other things behind the scenes, because users don't care about the hundred things happening in your app; they only want to get their job done. Customers' attention is one of the most valuable things in the world, and somebody is giving you that attention, so you have to be very mindful of it and make the best use of technology to get the best out of your customers' time.

 

Rui Costa:

Wow, amazing, super answer. Thank you so much. It was really a pleasure, and a very insightful view on performance. Thank you so much.

 

Senthil Padmanabhan:

Well, thank you, thank you. Very glad to be here. I learned a lot too, and I'm looking forward to hearing the podcast.


Rui Costa:

And thank you all for listening. Don't forget to follow us on the usual podcast platforms like Spotify, Google Podcasts, or Apple Podcasts. Thank you so much.

 

Follow us for more episodes!

 

Apple Podcast - The App Performance Café by Codavel
Spotify - The App Performance Café by Codavel
Google Podcast - The App Performance Café by Codavel