TLDR: Social network apps are naturally focused on their users’ engagement, but with a fast-growing user base that is getting more impatient, it is paramount to ensure a fast and frictionless experience. Optimizations such as using a CDN or pre-fetching images are helpful, but not enough. Recent performance results from industry leaders prove what we already feel every day as users: the wireless last mile makes the user experience highly inconsistent. The problem lies in HTTP's inability to handle the natural instability of wireless links. At Codavel, we ran our own tests to evaluate HTTP performance and, like many others in the industry, what we see is a long tail of high fetching times for images and video.
Performance and engagement - Social network apps
With around 3.5 billion daily active users, social network apps account for 80% of the total time spent on mobile devices worldwide. And when users reach for their mobile devices to open their favorite social app, they expect nothing less than high performance. 76% of users rank fast and responsive apps as the most important attribute of user experience, which aligns with their well-documented lack of patience: if loading takes more than 3 seconds, 53% of users say they would leave. This is especially important for social networks, where the goal is to keep users engaged at all times with content of different types and sizes that is expected to never stop showing up, all with amazing quality - from images to videos or even comments. Take the even more dramatic case of Snapchat: they found that if a “snap” took 2 seconds or more to load, users either moved on to the next “snap” or simply left the app.
Several social network industry leaders have been highlighting the benefits of performance investments, stating that “improving performance can actually drive usage” as in Instagram’s case, or “a faster app encourages more engagement” as commented by Pinterest engineers. Continuing with the Pinterest case, they reported seeing “a consistent pattern”: when the home feed loads faster, they observe higher engagement, with more users scrolling past the first page of posts and an overall increase in the number of posts viewed.
In another example from Instagram, by increasing performance, they saw “app-wide impressions across all mobile platforms increase by 0.7%, and likes increase by 0.4%. This was driven by increases of impressions on all surfaces. For instance, “Explore Photos” impressions increased by over 3%, and user profile scrolls increased by 2.7%. These trends continued over time, confirming that good performance brings users back.”
There is still a (big) problem: high and unstable latency
There are many factors that can impair overall social engagement KPIs, ranging from the quality of the content to the CDN in use. But undoubtedly one of the most important is how many of your users are affected by high network latency. Network latency has a high impact on the performance of any application, and the problem is that it is very hard to control and predict, as it depends strongly on the quality of the network link, which, as we will see, is still an ongoing problem.
Although not a social network app, Uber fits our case here perfectly, as it is known as a highly optimized app built on top of a very robust infrastructure. And what they see is not good news - they experience high network response times across the globe, with the scenario being even more challenging in regions like India, Southeast Asia, or South America.
Uber - Tail-end latencies across the major cities
This happens because, in contrast with wired networks, wireless networks have unique characteristics and challenges: they are highly susceptible to losses from interference and signal attenuation, which result in much higher (e.g., 4-10x) and more variable round-trip times (RTTs) and packet loss than their wired counterparts. The result is not only high loading times but also highly variable and inconsistent network response times, with significant variations observable depending on the carrier, the location, or even the time of day.
Uber - Tail-end latencies, variations between carriers, time, and days
It’s not only Uber identifying these problems. Quoting the well-known Akamai study on unlocking mobile application performance, “Cellular connections, which currently comprise about 38% of all mobile access on the Akamai network, are particularly challenging. They not only are subject to high latency, but also suffer from highly variable congestion rates – even on relatively fast 4G networks – resulting in uneven end-user response times. This makes it difficult for mobile app developers to deliver the consistent, high-performance experience their users want.” Or, from a post on the Twitter Engineering Blog, “in many locations, latency is too high for users to consistently access their Twitter content”.
The one to blame - HTTP is not suited for mobile apps
HTTP was designed in the early 1990s as an application layer protocol that runs over TCP. More recently, in 2015, its second version, HTTP/2, was released, presented as a faster, more reliable, and more efficient version of the old HTTP. But when it comes to today’s needs and the ever-growing demand for mobile content, even this latest version has its struggles. As stated by Fastly, “while HTTP/2 performs better than HTTP/1 in most situations, it isn’t as good when connections are bad — such as a high-loss connection, or one with a bit of loss and a lot of latency”. The Uber Engineering Blog also reinforces this idea: “Our mobile apps require low-latency and highly reliable network communication. Unfortunately, the HTTP/2 stack fares poorly in dynamic, lossy wireless networks.”
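A big part of why HTTP/2 struggles on lossy links is TCP head-of-line blocking: all multiplexed streams share one ordered byte stream, so a single lost packet stalls every stream until it is retransmitted. The following sketch, under simplified assumptions (independent per-packet loss, one stall per affected stream), compares the expected number of stalled streams for a shared ordered connection versus per-stream reliability; the function names and parameters are illustrative, not from any of the quoted sources.

```python
def expected_stalled_tcp(n_streams, packets_per_stream, loss_prob):
    # One shared, ordered TCP connection: a single lost packet anywhere
    # blocks delivery on ALL streams (head-of-line blocking), so every
    # stream stalls whenever any of the n*k packets is lost.
    p_any_loss = 1 - (1 - loss_prob) ** (n_streams * packets_per_stream)
    return n_streams * p_any_loss

def expected_stalled_per_stream(n_streams, packets_per_stream, loss_prob):
    # Per-stream reliability (the model QUIC uses): a lost packet only
    # stalls the stream it belongs to.
    p_stream_loss = 1 - (1 - loss_prob) ** packets_per_stream
    return n_streams * p_stream_loss

# Example: 10 streams of 10 packets each over a link with 2% packet loss.
print(expected_stalled_tcp(10, 10, 0.02))        # ~8.7 streams stalled
print(expected_stalled_per_stream(10, 10, 0.02)) # ~1.8 streams stalled
```

Even at a modest 2% loss rate, nearly every stream on the shared connection suffers, while per-stream reliability confines the damage to a couple of streams - which is why loss-heavy wireless links hit HTTP/2 so hard.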
You may think, “ok, let’s optimize HTTP then”. Well, that doesn’t seem to be an option, as it implies “forcing a ‘selective penalty’ on some users, for improving the content delivery for others”, as stated on LinkedIn’s engineering blog. In other words, HTTP can’t be optimized to deliver a fast and consistent experience for everyone without destroying the experience for some.
Conducting our own HTTP testing - Codavel
At Codavel, we decided to take a deeper look at this phenomenon, focusing on the delivery of content for the feed of a social network app in India. We analyzed performance data from 852 user sessions (from 20 different users), where the action consisted of opening the app feed and scrolling down, amounting to a total of 26 image and 7 video requests. The average image size is 38,020 bytes, and the average video request size is 519,688 bytes.
Starting with images, the average time to fetch an image on the social feed was 352ms, whereas the median was 242ms. The fact that the average is significantly higher than the median is a sign of a problem: the distribution is skewed to the right, with a long tail of high image fetching times.
In the case of video content, the results were similar. The time to fetch a video segment was 741ms on average, whereas the median was 439ms: as with images, we observe a long tail of high video fetching times.
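To see why a mean well above the median signals a long tail, consider a small sketch using Python's standard `statistics` module. The fetch times below are synthetic, illustrative numbers, not our measured data:

```python
import statistics

# Synthetic fetch times (ms): most requests are fast, but a few very
# slow ones form a long right tail.
fetch_times_ms = [180, 200, 220, 240, 250, 260, 280, 300, 900, 1600]

mean = statistics.mean(fetch_times_ms)      # pulled up by the tail
median = statistics.median(fetch_times_ms)  # robust to outliers

print(f"mean={mean:.0f}ms median={median:.0f}ms")
# prints "mean=443ms median=255ms"
```

The median says the "typical" request looks fine, yet the mean is nearly double it: a minority of very slow fetches is dragging the average up, which is exactly the pattern we observed for both images and video.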
Our take on the problem
As we have seen, HTTP is incapable of handling the natural instability of WiFi or 4G links and has been undermining the ability to deliver a frictionless, high-quality experience to millions of social app users.
The mobile app industry has already identified the need to replace the existing HTTP with new protocols prepared to handle the volatility of wireless links. This is why industry leaders like Google, Facebook, or Uber have been investing heavily in a new protocol, QUIC, which evolved into HTTP/3. However, QUIC brings two major drawbacks: adopting it is an engineering and operational nightmare, and its performance improvements are somewhat limited. I'll spare you the details here, but in case you're curious, take a look here.
At Codavel, we devote ourselves to this fundamental challenge: building a mobile-first protocol, robust to wireless link instability, that can be used effortlessly by mobile apps. That is what you get with Codavel Performance Service and our groundbreaking protocol, Bolina, which clearly outperforms both HTTP and QUIC/HTTP3. To prove it, we used the same methodology as in the aforementioned HTTP performance results and analyzed the behavior of Codavel Performance Service when delivering content to a social network app feed for users in India. The results demonstrated two significant improvements over fully optimized HTTP: more content delivered in the same time interval and more user sessions with low loading times, which leads to significantly more posts seen and higher user engagement overall.
Want to know more about our testing and results?
Download the report here and talk with us,
we can help you achieve the same results!