Café with Nolan from Twitter, on the psychology around performance

Transcript of the App Performance Café episode with Nolan O'Brien, from Twitter, originally published on May 25, 2020. More details on the podcast episode page.

Rui Costa:

Hi everyone. I'm Rui Costa and, as a network engineer, I'm in love with two things: mobile app performance and coffee. That's why I decided to start the App Performance Café, where I'll bring the most knowledgeable individuals in performance to have relaxed conversations.

Today, we will have Nolan O'Brien from Twitter. We will start by discussing why we should care about mobile app performance, then go into the psychology behind user experience, two ways to handle user frustration, and how to measure all of this. I hope you enjoy it, and don't forget to follow us on the usual podcast platforms and visit performancecafe.codavel.com.

Episode Start

Rui Costa:

Welcome to the App Performance Cafe. My name is Rui and I'm very excited to announce our guest today  - Nolan O'Brien from Twitter. Nolan, thank you so much for taking the time to join us and share a little bit of your views on mobile app performance. 

 

Nolan O’Brien:

Thanks, Rui.

 

Rui Costa:

Well Nolan, can you tell us a little bit about who you are, what you do, and your experience around this topic?

 

Nolan O’Brien:

Yeah, sure. Briefly, I'm a software developer at Twitter, as you mentioned. I've been doing software development for 15 years, and pretty much my entire career has been focused on performance, specifically around networking and on native applications. So, you know, back on Windows and Linux before there were phones, and now that there are phones, on iOS and Android devices.

 

Rui Costa:

Yeah, interesting. I'm very excited because in our preparation call, I noticed that you're all over mobile app performance, you touch every point of it. So I would like to start by asking: Why should we care about it? So why should we care about mobile app performance from your perspective? 

 

Nolan O’Brien:

Yeah, I think there's a lot of ways to answer that. The easiest answer is that we should care about mobile app performance because we care about users. I think one of the first things to do is to disambiguate performance.

As a term, I think we all know here on this show what it means. But whenever I bring up the topic of performance at work, among colleagues, or even outside, there's a lot of confusion about what we mean: advertising and revenue performance, engagement performance, whether users are interacting or not. What we are actually talking about is the things that are barriers to users being able to use your app. That's why it's important, because you can have the best app in the world. I like to use the analogy that your app is like a party, right? The better your app's features are, the better that party is and the more people are going to want to stay, because of how compelling the features you built are. However, if you don't have application performance, people can't get in the door, and if they can't get in the door, you're keeping people out. So get them through the door; have performance get out of the way completely so that they can just enjoy what your application provides.

 

Rui Costa:

So it's both an enabler and a non-blocker. Would you say so?

 

Nolan O’Brien:

Yeah.

 

Rui Costa:

How would you describe that from the user perspective? So from what the user feels, let's say.

 

Nolan O’Brien:

Yeah. So, the user feels a lot of things when using the app. I think a good example of the different areas that users can experience with performance is what Google has laid out in their Next Billion Users program. They break performance down into several categories. There's connectivity, so the speed - pretty much all apps run over the internet now, so speed over the internet. There's the device, recognizing that people all over the world have different devices at different levels. You and I probably have the most recent Android or iPhone, but the majority of the world doesn't, so there's the device. Then there's data cost - data is not free, and for a lot of the world it's quite expensive. Then there's the battery itself: these are mobile devices, so keeping that device up and running is really important. And then last, the overall experience, which encompasses all of those, but in the glued-together user experience you're providing the user, and how you can smooth out the edges where you have difficulties with performance getting in the way.

 

Rui Costa:

Wow, interesting. So you mentioned user diversity from some perspective. Would you say it's all about the network and devices? Or, for example, different use cases, different perspectives from users in terms of the tolerance they have to get in the door, to get the right content in the application?

 

Nolan O’Brien:

It's absolutely a good point that every user is different, and catering to an average is a very common source of mistakes in how you build your apps. If you build everything around the average user that you already have, you're going to miss a lot of the nuance in where people's thresholds are for everything. Some people don't mind their battery being drained, so they can handle that. And some people actually don't mind slow connectivity; some have built up a resilience to it. So being adaptive to the user's use case is a difficult challenge [inaudible].

 

Rui Costa:

And how do you handle that in an application? How do you handle such diversity - device diversity, network diversity, user diversity, and user expectation diversity as well?

 

Nolan O’Brien:

I mean, I think diving into how we handle it is kind of a complicated thing to answer. It starts with recognizing up front that there's diversity, and then building some principles on how you start addressing it. Measuring a lot is really important, and when you measure, be really rigorous about the numbers you're using.

For instance, understanding that there are user cohorts all over the world that have different behaviors based on what they care about. There are regions - I'm not gonna name regions - that are very tolerant of slow connectivity, but very intolerant of data usage. So incorporating that into how you build things is really important. But before getting into the nuance of how you're actually going to start tackling it, you take a step back. I think one of the most important parts, before even tackling performance, is the psychology of what performance is to people, and understanding that you are interacting with a human with your app. Grounding yourself in the foundations of the psychology of performance, focusing on speed as an entry point and the experience the user has, can really help ground the way you design your apps and move forward with where you need to optimize and whatnot.

 

Rui Costa:

Wow. Can you share a little bit more about your view on that psychology from the user perspective? Myself, as a network engineer, I tend to look at performance as speed and that's it - at least that's my natural instinct for thinking about performance. I'm rather curious to learn a little bit more about how you look at that psychology from the user perspective, with regards to performance.

 

Nolan O’Brien:

Yeah. So I think that helps; that's a good grounding. When you're doing network engineering - and I'm with you there - you can always make things faster, so you shave off every millisecond everywhere you possibly can, right? But at some point there's a question of return on investment, an ROI: where making improvements has value, and where making improvements actually has no value. So getting the user's perspective on where the latencies are hurting them and where they're not is really important. From a user perspective, there's a psychology behind what speed is and what latency feels like.

So in 1968, there was a documentary that went through the concept and psychology of powers of 10: as things increase by a power of 10, that increment moves them, in a person's mind, into the next bucket. Then in the 1970s, that was expanded from physical objects into time, with an understanding that powers of 10 also translate directly into temporal effects. And in the 1980s and nineties, there was a thorough amount of investigation into the speeds of human interaction with computers - human-computer interaction - which was a very full field of research for a long time. The principle is effectively that as the powers of 10 of the speed go up - as the latency or slowness goes up - the mental model of how something is performing changes. What's important is to identify which bucket the task at hand is supposed to fit into for the user, and how you make sure you fit into that bucket. So just to go through the ones that have been established for powers of 10: anything under a hundred milliseconds, a tenth of a second, is considered interactive. Now, a tenth of a second per frame is a really poor frame rate, but we're not talking about that; we're just saying that as a user is interacting with something directly, if it takes less than a hundred milliseconds, it's interactive. Then something that's one second in latency or duration - that's responsive. It's taking time, but the user doesn't get stuck; it just takes time, and it's a responsive task.

 

Rui Costa:

It's perceived, but not boring.

 

Nolan O’Brien:

Yeah, not boring. And then you get into the harder areas, which is like 10 seconds: something is working, it takes the time it needs, but it's working. I'll talk a little bit more about the threshold between responsive and working, but you can tell that the user has transitioned to "This isn't just something that's going to happen; I have to wait for it." And then the last one is something that's a minute or longer, which is that you've got a series of tasks or a job to do, and that combination or workflow can take that amount of time. You can break down some examples of that. Interactive would be tapping on your navigation and the navigation transitioning. Responsive would be something on a quick network filling in in one second or less - some text being sent or received. At 10 seconds, something's working: that's like posting something that has a little bit of data in it, enough data that you know it takes time - maybe posting something as an individual on the internet. And then there's the concept of a whole minute, which is the entire workflow of sending something. You're posting something out to Reddit or whatnot: you're composing what you want to say, you're editing it, you're downloading an image that you want to attach, attaching it, uploading the whole thing - that's in the minute bucket. But where we can really drive the reduction in users getting frustrated is seeing what bucket an instance of a user experience falls into, whether those are matching, and trying to set a threshold to make sure you get under it - but not over-optimizing such that you drive it to zero, because you can't drive everything to zero.

And I think a great example is really the things that are going past 10 seconds that need to get under 10 seconds, and the things that are around 10 seconds or above one second that need to get down as much as possible. When a user sees something that's taken more than a second but has not taken 10 seconds yet, there's this serious psychological effect - I call it dreadlock - where they transition from anticipation ("this isn't responsive anymore, but when is it going to finish?") to effectively dread ("Why is this not done? Why is it still happening?"). Trying to get under that dreadlock, or salvaging the user experience because it's going to take long and reinforcing to the user that this will take time, is really important. That's why, as a principle, trying to get things that are supposed to be responsive under three seconds in the majority case, and under one second in the target case, is really important. And for things that can't reach that, you really have to build out the fundamentals of communicating the progress to the user visually.
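To make those powers-of-ten buckets concrete, here is a minimal sketch in Swift (not from the episode) of how an app might classify a measured duration. The enum, its names, and the exact cutoffs are illustrative assumptions, not a framework API.

```swift
import Foundation

// Illustrative buckets based on the powers-of-ten thresholds discussed above.
enum PerceivedLatencyBucket {
    case interactive   // under ~100 ms: feels immediate
    case responsive    // under ~1 s: takes time, but the user isn't stuck
    case working       // under ~10 s: the user knows they have to wait
    case workflow      // around a minute or more: a whole job, not one task

    init(duration: TimeInterval) {
        switch duration {
        case ..<0.1:  self = .interactive
        case ..<1.0:  self = .responsive
        case ..<10.0: self = .working
        default:      self = .workflow
        }
    }
}

// A 2.5 s fetch has already left the "responsive" bucket and is approaching the
// ~3 s point where dread sets in, so the UI should be showing visible progress
// rather than blocking silently.
let bucket = PerceivedLatencyBucket(duration: 2.5)
print(bucket)  // working
```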

 

Rui Costa:

In that gray area between one second and ten seconds, would you say that it's a step function, in the sense that there is a point after which the user really gets frustrated? Or is it a linear function?

 

Nolan O’Brien:

I think it's more gravitational, in that for the first three seconds the user is gravitationally still expecting something responsive that just wasn't fast enough; after those three seconds they snap to this frustration point where it's no longer a responsive thing, it's now work. And if you're not providing context to the user, particularly showing progress of where you are in the process of delivering on the task at hand, they've entered into frustration, and it's just a growing level of frustration. Like you said, every user is different, so whether that function is linear, hyperbolic, or logarithmic really depends on the person. But it's fairly well understood that once you leave the expectation of it being responsive and accept that what started out feeling unresponsive is now work, you are growing in frustration over time, and that will not stop until you either save the user by finishing, or provide them relief by making them understand that completion is going to take time and how long it will take.

 

Rui Costa:

How do you handle that? I'm thinking about cases where you know beforehand that, for the majority of users or at least a subset of them, you will not be able to meet the, say, one-second or three-second threshold. How can you handle that frustration? How do you manage it? Do you focus on optimizing the delivery of all the content in that specific action, or can you do something else?

 

Nolan O’Brien:

There are a number of routes, right? The important thing is the point at which you're no longer a responsive task and you owe the user progress. And this leads from the psychology of powers of ten into maximizing the user experience within that transition into the ten-second bucket of work: the psychology of progress. So let's think about it for a second. A task takes time, right? Looking back in reverse: oh, that task took five seconds. You can think of it as a linear progression of time over five seconds. So if a user were to enter into something that was going to take five seconds, and you could provide a linear progress bar - or some sort of progress indicator - that starts from nothing and progresses up to completion in a linear fashion, that would be a true reflection of accuracy to the user. And that's a good thing, right? But linear progress is very difficult; it's very difficult to achieve. When you use your apps on iOS or Android, I think a very common thing people notice these days when it comes to progress is finish-slow progress: it starts at a regular cadence, and then it slows down more and more and more, until at the end it's just really slow. It's painful, right? This is the absolute opposite of what the user really wants. And with this finish-slow progress, there have been psychological experiments showing how much worse that experience is.

If you take the linear experience that's ten seconds, and then you provide the exact same duration - ten seconds for the task - but you progress it in a slowing-down fashion, users will say, close to a hundred percent of the time, that the slowing-down experience was the worse one. Even though the overall time for the experience was the same, it's not just about expectations being met. The great thing about linear progress is that the expectation is set, it's met, and it's perfect. With slowing down, the expectation is set by the velocity up front, and the human mind is very good at latching onto velocity right away. But when that velocity erodes, every bit of erosion it detects breaks that trust, and it hurts the user experience.

So that's a bad problem. And then the reverse: if you set up the opposite situation, where you speed up the progress - you start slow and then progressively get faster and faster until it completes - and that is also exactly 10 seconds, then even though it takes exactly 10 seconds, like linear progress, it will be perceived by the user as a better experience. Because, once again, the user's mental model keeps reassessing the velocity of the progress, and it keeps getting better. You're breaking the expectation, but in a positive manner, and the summation of those positive reinforcements means that an identical amount of time is perceived very differently across the three different progress mechanisms.

Last, finishing fast is the most compelling, if you can do it. It's very difficult to accomplish, though, because to artificially build something that speeds up its progress would require a really good, concrete understanding of what the temporal impact of each step will be, and so you can't really even fake it. What you have instead is this problem space of an undetermined amount of time something will take. You have progress, and roughly the effects that each step of a job will take - whether it's the latency of downloading and then the latency of decoding, or vice versa, encoding something first and then uploading it - and you don't really have an idea of what those are. But you can show the progress along the way accurately, though not necessarily faithfully, to give the user a good experience.

And this is where you start getting into some clever solutions: continuing to preserve progress, figuring out how you can get into that mind space, how you can relay the same info - the same concept of progress - but in a way that is much more satisfying.
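As a side-by-side of the three progress presentations described above, here is a minimal Swift sketch (not from the episode): the same real duration mapped to a bar three different ways. The cubic ease-out and ease-in curves are illustrative assumptions, not anyone's shipped implementation.

```swift
import Foundation

// Three ways to map actual completion (0...1) to what the bar displays.
enum ProgressStyle {
    case linear      // displayed == actual: expectation set and met
    case finishSlow  // fast early, crawls near the end: perceived as worst
    case finishFast  // slow early, accelerates to completion: perceived as best

    /// Maps actual completion (0...1) to the fraction shown on the bar.
    func displayed(for actual: Double) -> Double {
        let t = min(max(actual, 0), 1)
        switch self {
        case .linear:     return t
        case .finishSlow: return 1 - pow(1 - t, 3)  // velocity erodes over time
        case .finishFast: return pow(t, 3)          // velocity keeps improving
        }
    }
}

// At 80% of the real work, finish-slow already shows ~99%, so the last stretch
// feels interminable; finish-fast shows ~51% and then visibly accelerates.
for style: ProgressStyle in [.linear, .finishSlow, .finishFast] {
    print(style, style.displayed(for: 0.8))
}
```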

 

Rui Costa:

Not with a progress bar, I guess.

 

Nolan O’Brien:

So the progress bar is still there - I'll get away from progress bars in a sec - but there are a few ideas with that. There's the concept of indeterminate problems. This is why you see a lot of degrading progress bars: we think it'll take five seconds, so there's linear progress up to about five seconds, but then the task isn't completed. So you have to degrade the progress - well, we didn't meet the goal, so we'll slow it down a little; we still didn't meet the goal, so we'll slow it down a little more. You have this degradation. Now, the problem with degradation is that your completion lands at the exact same point where your degradation finally reaches its target.

If you could front-load that and say: we know this part has an indeterminate factor, so degrade early, but reserve a substantial amount of the progress for the end. Say it's a ten-second task. You can degrade, but you only get to 60% by the time those 10 seconds elapse, and then you take the last third of a second to one second to quickly ramp up the speed to completion. You can actually take 11 seconds with that progress bar, and users will feel better about it than a linear progress bar that takes 10 seconds. That's because the finish-fast experience really impacts the user's perception. So basically the point is: you can erode the user experience as you degrade, but resuming the experience can counter that, and the faster you resume, the faster you recoup that loss of expectation and exceed it, making for a good finishing experience. We call this the principle of Finish Fast.
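Here is a rough sketch, in Swift, of that "reserve progress, then finish fast" idea for an indeterminate task. The 60% cap, the 10-second estimate, and the type names are assumptions made for the sketch, not Twitter's implementation.

```swift
import Foundation

struct ReservedProgress {
    let cap: Double = 0.6                      // never exceed 60% while the outcome is uncertain
    let estimatedDuration: TimeInterval = 10   // our best guess for the task

    /// While the task is still running: approach the cap as the estimate is used up,
    /// degrading gracefully instead of stalling at 99%.
    func displayed(elapsed: TimeInterval) -> Double {
        cap * min(elapsed / estimatedDuration, 1)
    }

    /// Once completion is known: ramp the reserved remainder to 100% quickly,
    /// so the finish feels fast even if the task ran over its estimate.
    func displayedWhileFinishing(current: Double, rampFraction: Double) -> Double {
        current + (1 - current) * min(max(rampFraction, 0), 1)
    }
}

let progress = ReservedProgress()
let atEstimate = progress.displayed(elapsed: 10)                   // 0.6
let midRamp = progress.displayedWhileFinishing(current: atEstimate,
                                               rampFraction: 0.5)  // 0.8
print(atEstimate, midRamp)
```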

 

Rui Costa:

Can you give us an example? I'm thinking about use cases, like screens in a mobile app, where you can apply that rationale.

 

Nolan O’Brien:

Installation of an app on Android or iPhone, right? There's the latency of the time it takes to download the application, and then there's the latency it takes to install the application. It's a lot of work, right? The latency of the download is a lot, and the installation of the download is a lot. Now, unfortunately, because it's such a large thing, it's fit into a very small package - particularly on iOS, where it's a small circle. Android has a better experience because, I believe, you can pull down and see the full progress bar going across the whole screen.

Now it's kind of silly, but that line being long actually helps the user experience. Having something small and tight really reinforces that you're not making progress, even when you are, because you can't display it with any granularity. So just there on its own, there's a problem with the psychology of showing the progress. And then within that, there's really changing the expectations. Instead of mapping progress one-to-one to your bytes per second, or to the exact number of bytes you've downloaded versus the bytes you're going to download, and then doing the same one-to-one mapping for the installation - extracting and putting it into place, bytes expected versus bytes written to completion - and having those side by side, you take a step back and understand that there are those two steps. You're making a big promise: that those two will map up equally - so 50/50, as it were - and also that they will end up mapping to the user feeling satisfied at completion.

And so the better experience might be to see that up front and maybe front-load both parts, and say: I'm going to take 25 to 30% and say that's downloading. The next part, which is installing, comes later, so I'm going to take a larger portion, 40 to 50%. And then the last 20 to 30% is actually nothing; what I'm going to do is show the best estimate of that progress as it's happening, front-loading the cost. That way, the completion ends up having a significant enough amount of the progress bar - or circle, in the case of iOS - and then a fast completion, so the user has the satisfaction of "Oh, it didn't take as long as I expected, because I was expecting it to take five minutes given how long that circle was taking to fill. But in the end, it really ramped up and completed, and I'm really grateful." So that's a concrete example of how you can take advantage of that.

So I think for short progress, something tight is good. Once it gets larger and you're going across minutes, I think a small, tiny UI is a problem, and you really want to have a place where you can look up how the progress is going. For instance, in the Twitter app you can continue navigating the app as you're posting your tweet, and we have the progress showing so you can keep track of it, but you're not stuck - you're not locked by that progress. That's just with a progress bar; progress can be done in many different ways, though.
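As a rough illustration of the weighting described in that install example, here is a Swift sketch: each phase fills a fixed slice of the bar, with a slice held back for a fast completion ramp. The 30/50/20 split mirrors the hypothetical numbers in the conversation and is not how the App Store or Play Store actually report progress.

```swift
import Foundation

enum InstallPhase {
    case downloading(bytesReceived: Int64, bytesExpected: Int64)
    case installing(fraction: Double)
    case finishing(rampFraction: Double)

    /// Overall progress shown to the user, 0...1.
    var overallProgress: Double {
        switch self {
        case let .downloading(received, expected):
            let f = expected > 0 ? Double(received) / Double(expected) : 0
            return 0.30 * min(f, 1)                        // download fills the first 30%
        case let .installing(fraction):
            return 0.30 + 0.50 * min(max(fraction, 0), 1)  // install fills the next 50%
        case let .finishing(ramp):
            return 0.80 + 0.20 * min(max(ramp, 0), 1)      // last 20% ramps up fast at the end
        }
    }
}

let midDownload = InstallPhase.downloading(bytesReceived: 40_000_000,
                                           bytesExpected: 80_000_000)
print(midDownload.overallProgress)  // 0.15 - halfway through the download fills 15% of the bar
```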

 

Rui Costa:

Yeah, I was thinking about how that relates to delivering a page in an application where you don't deliver the entire content at the same time, but you start fulfilling the critical content as soon as possible and then you include extra content. How do those two sides correlate?

 

Nolan O’Brien:

I'm glad you asked that. So with the progress bar, basically you have nothing you can show until the whole thing is done - you're always stuck. Whether you're watching it or not, you start by not having the thing until it's completed. Progressive loading is the concept that you can bring in the content progressively as it loads.

Starting simply, I think everyone's familiar with video, in that you have an amount of time you have to buffer, but then you're loading it in progressively as you continue to watch it. And it's a wonderful experience, right? Maybe the quality starts a little poor, but then it ramps up over time. You notice it, I think, mostly on Netflix or Amazon Prime: it starts at pretty decent quality, but over time - like 30 seconds - it gets to be pretty high, because you have this content progressively coming in and you're just experiencing it. You're no longer waiting. You don't have to wait for the full buffer to fill, because you got into it sooner - lower quality, but sooner - and you're unblocked.

And similar things can be applied to any amount of content. If you have a page that you're trying to load, and it has all kinds of media in it, all kinds of text in it, and all kinds of ads in it, you can progressively load that as well. You take the most critical parts that you want to serve the user and you get those up front, and then you asynchronously bring in the pieces you need later on. Then the final thing that I think we've done a really great job of at Twitter is our progressive loading of images - a lot of companies do this - but the concept is that if you can get a low-quality image that progressively improves in quality, then when you have high latency, you have this amazing experience where what used to take 30 seconds on a 2G network now takes three to six seconds until you actually get something where you can see what's happening. And this is great; you're going through an experience - we'll use a feed of posts, right? You see this post from someone you know, and it's coming in, and you can't see what it is at all. If you have to wait the full 30 seconds to know what that image is, it's very frustrating if it's not even an image you wanted anyway. But if in three to six seconds you get a good idea of what it is before it's fully clear - like, "Oh, it's my friend's baby, I'll stay and see how the baby's doing," or "Oh, my friend is showing off their new car, I do want to see that - that's a cool car," or they took a picture of their lunch and, you know what, I'm gonna keep scrolling, I don't have to wait - you can make an informed decision as a user early. And that's empowering. It's empowering to start consuming what's there, mentally gauging what you want, what you're consuming, and whether to disengage or not. When you're locked, you don't have that.

So progressive loading is an amazing feature, not just for images but for any content that you can load in piecewise instead of all at once. And I think talking about posts is great: like on Reddit, you don't load the entire page of posts, you load the top, and then you keep loading the next ones while you're waiting for the rest, so you can read top to bottom. This really helps if you experience it on something like a 2G connection.

A lot of people - like me, I have a very fast wifi connection at home and the newest iPhone - never see this experience. But when I experience it by forcing myself into a 2G scenario, it's really compelling: I can interact with something immediately - immediately as in three to six seconds, versus 30 to 60 seconds. It's a game changer.
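One simple way to approximate the image idea in Swift is sketched below, and it is only an approximation: Twitter's approach as described above improves a single image progressively, while this sketch assumes the backend exposes a separate small low-quality variant alongside the full asset (both URLs and the class name are hypothetical).

```swift
import UIKit

// Show a recognizable preview within a few seconds even on a 2G link, so the
// user can decide early whether the full image is worth waiting for.
final class ProgressiveImageLoader {
    func load(previewURL: URL, fullURL: URL, into imageView: UIImageView) {
        // 1. Fetch the tiny low-quality preview first and show it as soon as it lands.
        URLSession.shared.dataTask(with: previewURL) { data, _, _ in
            guard let data = data, let preview = UIImage(data: data) else { return }
            DispatchQueue.main.async {
                // Only show the preview if the full image hasn't already arrived.
                if imageView.image == nil { imageView.image = preview }
            }
        }.resume()

        // 2. Fetch the full-quality image in parallel and swap it in when ready.
        URLSession.shared.dataTask(with: fullURL) { data, _, _ in
            guard let data = data, let full = UIImage(data: data) else { return }
            DispatchQueue.main.async { imageView.image = full }
        }.resume()
    }
}
```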

 

Rui Costa:

Yeah, absolutely. So you're basically saying that we decrease the quality of the content, let's say - or at least part of the information - to improve operability from the user perspective? Or are there cases where - ?

 

Nolan O’Brien:

I'm just going to rephrase what you said, though. I don't think anyone decreases quality, right? What happens is you have nothing present, no quality at all, and then you progressively increase quality and fidelity over time. So I don't like the comparison of "Well, the final product is this quality, but you're only showing this quality, so you're degrading it" - that's not the comparison. The comparison is nothing on the page versus something on the page, and that something is better quality than nothing, going in the right trajectory.

 

Rui Costa:

Then, on the other hand, are there cases where you see that it's more important to deliver rich content - so heavy content from a network perspective - versus trying to optimize by going to other types of content, like text on the screen or similar?

 

Nolan O’Brien:

I don't know if I quite get the scenario you're putting up. You're talking about - oh, I see what you're saying. That's more of an adaptive experience. So instead of progressive loading, what you have is the concept of adaptive experiences. If you're trying to load content and you have a plethora of content to select between, being smart about the content going to the user, so they can consume as much as possible, is valuable.

If I were just going to shove video after video after video at a user in São Paulo, Brazil, that's not going to be very compelling when everything is frozen, you don't see the video, and then when it does load it's low quality and you can't move on to the next content. Whereas if you could diversify and say: listen, for people on high bandwidth, maybe that is the experience you can give them.

Maybe they're in the United States with a great connection. But what if they're far enough away that they need a mixture? Then you can start mixing in images with the video, or text with the images. And maybe conditions are so bad that you have mostly text, but the text is usable, and then you have images along the way. And there are even better options, right? One of the things is being adaptive and trying to serve users what you think the best experience is going to be. But you can't predict every single user, so for some users you need to give them control. This is what a lot of companies do with data saver, right? Data saver is basically the user saying, "Listen, I need content, but for the content I get, you can give me low-fidelity media - which is usually the heaviest stuff - and I will choose when to go from the low-quality media into the real-quality media on my own." And that can be really empowering, because then you don't have to worry so much about getting the mix perfect in a way that may frustrate the user; you give them the control. I personally believe that you can never go wrong by giving the user control of their own experience, versus trying to cater to the average of millions of users.
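A minimal Swift sketch of that adaptive selection with a data-saver override follows. The thresholds, variant names, and fields are assumptions made for illustration, not any product's real logic.

```swift
import Foundation

enum MediaVariant { case textOnly, lowResImage, highResImage, video }

struct AdaptiveMediaPolicy {
    var dataSaverEnabled: Bool
    var estimatedBandwidthKbps: Double
    var isMetered: Bool

    func variant() -> MediaVariant {
        // User control beats any heuristic: data saver forces the lightest media.
        if dataSaverEnabled { return .lowResImage }
        switch (isMetered, estimatedBandwidthKbps) {
        case (_, ..<100):      return .textOnly      // 2G-class link: text first
        case (true, ..<1_000): return .lowResImage   // slow metered connection
        case (_, ..<5_000):    return .highResImage
        default:               return .video         // fast enough for heavy media
        }
    }
}

let policy = AdaptiveMediaPolicy(dataSaverEnabled: false,
                                 estimatedBandwidthKbps: 250,
                                 isMetered: true)
print(policy.variant())  // lowResImage
```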

 

Rui Costa:

Yeah, absolutely. Wow, so much to learn here. Looking at this, the challenge that I see is: how do you measure this? How do you not only measure, but actually test what you're doing with respect to tackling these issues?

 

Nolan O’Brien:

This is probably where 90% of the work is: measuring and understanding the problem space. You have to measure and understand users, you have to measure and understand the technical challenges, and you have to measure and understand the changes you want to make. I can't give you specifics, but what I can say is there are steps you can take.

First, identify the areas where you feel there's a user experience problem, and try to measure the process from a user perspective. Don't look at every request like, "Hey, these requests are fast, these are slow." Users don't care which requests are fast or slow; they care whether their experience is fast or slow. So try to take larger, holistic measurements of the whole experience - whether it's doing an action, like putting together a composition and sending it off in an email or a post, or something on the consumption side, like navigating somewhere and wanting to consume it, like an AR experience.

If you can capture users in those larger envelopes of the actions they're doing and measure that, you can see if they're meeting the expectations you want for them, which is fast, right? Then you can start considering: okay, we've recognized that this particular part of the user experience is not meeting users' needs. Match it up with qualitative information - surveying users, seeing where they're at, finding the deficiencies, like "Hey, I'm in this region and this particular thing is too slow for me." And then from there, measure, measure, measure to fully understand what you're going to do to improve it. Don't just say, "It's here, it's a ten-second experience, we need it to be three seconds." Well, where is every millisecond of that duration going? Every millisecond. Is it going into the DNS lookup, and we have to optimize DNS? Is it going into the TCP handshake? Is it going into the compression or the transcoding side, or is it just the transfer of the data, and we need to compress better? Or is it something in between - do we have a bug somewhere? It's really getting to the point of defining where every single millisecond is being attributed for the thing you're targeting, so that you can start breaking it apart and solving for it.
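Here is a minimal sketch, in Swift, of what measuring a whole user action rather than individual requests might look like: one trace for the experience the user perceives, with named stages so every millisecond is attributed somewhere. The class, the action name, and the stage names are hypothetical, not a real telemetry API.

```swift
import Foundation

final class UserActionTrace {
    private let name: String
    private let start = Date()
    private var lastMark = Date()
    private var stages: [(name: String, duration: TimeInterval)] = []

    init(name: String) { self.name = name }

    /// Call at each boundary: DNS resolved, connection established, response
    /// decoded, UI rendered, and so on.
    func mark(_ stage: String) {
        let now = Date()
        stages.append((stage, now.timeIntervalSince(lastMark)))
        lastMark = now
    }

    /// Report the holistic duration plus the per-stage breakdown.
    func finish() {
        let totalMs = Int(Date().timeIntervalSince(start) * 1000)
        print("\(name): \(totalMs) ms total")
        for stage in stages {
            print("  \(stage.name): \(Int(stage.duration * 1000)) ms")
        }
    }
}

// Usage: wrap the entire experience the user perceives, not one HTTP request.
let trace = UserActionTrace(name: "open_conversation")
// ... resolve, connect, fetch ...
trace.mark("network")
// ... parse the response ...
trace.mark("decode")
// ... lay out and draw ...
trace.mark("render")
trace.finish()
```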

 

Rui Costa:

I'm listening and thinking that this is heavy lifting, basically. So it's highly customized for every single problem that you're faced with in the app, or that you're trying to root out. Is this the next challenge with respect to understanding performance from a user perspective - trying ... ?

 

Nolan O’Brien:

There are two sides that I think come even before you have to do that. That's [where] I feel like you're going to end up once you've finished the easy stuff - that's the hard spot, so that's where most of your time is spent. But before you even get there, there's first-principles building. So get in slow and nice: put yourself on a slow network. Build it, run it, and experience it as a normal user would - not a user on wifi, not a user on the brand new iPhone 11 Pro, but a user on an iPhone 6, a user that has a 2G connection and data costs them, you know, 20 bucks a month, and they just can't afford the amount of data you're using. Measure that locally and see where the big, low-hanging fruit is to move forward. Get qualitative data too, whether it's internally at the company or externally with users, and see where the feedback says you need to start focusing.

But once you pass that point of low-hanging fruit, then you really do have to come up with a rigorous measurement system. And it doesn't have to be bespoke for everything; it's more the conceptual idea of an action that you want to measure for the user at a high level. You measure that, and you measure it for many, many actions. And then for the ones you care about, you don't say, "Well, this action compared to that action." You just take them on their own and see if you can make those individual ones better, where they need to be improved. Some things take 10 seconds and need to take 10 seconds; but some things take three seconds and should take one second - it's really about finding those things.

And then, if you really want to solve those problems - because it can get really complicated. Especially since, as you'll know, the global internet infrastructure is really complicated, and it's a miracle that it even works. The fact that I'm talking to you right now is magic, because I understand exactly the BGP protocols and the routes the HTTP connection is going through, and it's freaking miraculous that it isn't taken down every single day. Dealing with that miracle of the internet, you really have to understand every single second in that action so you can start attacking it. And some things are going to be shared, right? A lot of actions will share a lot of things.

So maybe you have a really big problem with your network communication to your own data centers, and you're like, "Listen, talking to my data center from around the world is too slow. I have to get something faster in between the user and the data center." That's just a general example - there's so much you can do. You have to understand what the problem is first, and sometimes solving one thing will have a cascading effect, solving many things.

 

Rui Costa:

Cool, interesting. Nolan, thank you so much. This perspective was very enlightening, I would say. Would you like to leave us with a very short summary of your recommendations for everyone working on mobile app performance?

 

Nolan O’Brien:

I would say - thanks for even listening. I'm just one guy working in this immense field. Don't be overwhelmed, because performance is really, really big and it's not possible to tackle everything simultaneously. So pick an area, start with some first principles and fundamentals, target what you want to see, and make those improvements. I really do like considering the user perspective first, getting into the psychology of what that effect is, and trying not to bias myself towards "Well, it's this amount of time and it should be this amount of time," when really that's not necessarily the goal. The goal is to understand what the user needs, and sometimes it's another area you need to focus on instead.

 

Rui Costa:

It's putting on the user hat instead of the engineer hat, I would say.

 

Nolan O’Brien:

It's one of those things that's very easy. It's like, "Listen, I've got this algorithm on my backend service, and it's causing 50 milliseconds of latency between my backend services, and I can cut that down to 25 milliseconds." That's a 50% reduction, right? I have done this over and over again: over-optimizing a thing that does not matter. That 50 milliseconds down to 25 milliseconds - a 50% improvement - doesn't amount to squat when you have multiple seconds between that content and the user.

So it's not easy. Sometimes you get stuck in a particular perspective as an engineer, and you have to take a step back to reassess where you really need to focus. Do lots of things; not everything will work, but that's okay, because if you care about the user, you'll do the right thing.

 

Rui Costa:

Thank you so much, Nolan. Thank you all for listening and see you next week. 

Hope you have enjoyed the conversation. Nolan had so much more to share with us. You can follow Nolan on Twitter; his handle is @NolanOBrien. And please follow us on the usual podcast platforms, as well as visit performancecafe.codavel.com.

See you next week.