---
title: "Device performance signals to log for mobile observability"
slug: "mobile-logging-device-performance-signals"
blurb: "Mobile teams need device-level visibility. Learn which performance signals—memory pressure, network behavior, and thermal state—reveal hidden mobile blindspots."
metaDescription: "Mobile requires a different approach. Learn what device performance signals mobile engineering teams should log for a proactive observability strategy."
cover:
  url: "/assets/posts/mobile-logging-device-performance-signals/feature-cover-desktop@1x.webp"
  alt: "Device performance signals to log for mobile observability"
socialThumbnail:
  url: "/assets/posts/mobile-logging-device-performance-signals/feature-cover-desktop@1x.webp"
  alt: "Device performance signals to log for mobile observability"
author:
  - "collin"
tags:
  - "observability"
  - "mobile"
publishedDate: "2026-04-23T00:00:00.000Z"
modifiedDate: "2026-04-23T00:00:00.000Z"

---

## Mobile apps are guests on the device; respect the environment

Unlike servers, mobile apps run on hardware with constantly shifting constraints: limited memory, unpredictable networks, and thermal throttling. Monitoring system signals like memory pressure, network usage, and device temperature helps explain performance issues that traditional crash or latency metrics often miss.

Beyond the important user experience signals I covered in [my last post](https://blog.bitdrift.io/post/mobile-logging-ux-signals), mobile observability requires logging device performance signals: memory pressure, network behavior, and thermal state.

Here are some device-level signals every mobile team should be monitoring.

## Memory pressure

One of the most common mobile performance killers is memory pressure, and memory leaks on mobile are often misdiagnosed as random crashes. When the OS kills your app for being resource-hungry, it doesn’t always look like a crash in your logs.

To detect memory-related issues early, mobile teams should monitor metrics such as:

- memory footprint over time
- OS low memory warnings
- foreground vs background memory usage
- memory growth across a user session

If memory usage steadily increases during a session without returning to baseline, it’s often a sign of a leak. Identifying this trend early can prevent OS-level out-of-memory (OOM) terminations that are difficult to diagnose after the fact. Tracking memory pressure events and low memory warnings from the OS allows you to see the “cliff” before your users fall off it.
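
To make this concrete, here’s a minimal iOS sketch (assuming Swift; the function names and log format are mine, not a standard API) that samples the app’s physical footprint via the Mach `task_vm_info` call and subscribes to kernel memory pressure events through `DispatchSource`:

```swift
import Foundation

/// The app’s physical memory footprint in bytes, or nil if the Mach call fails.
/// `phys_footprint` is the figure the OS weighs when deciding to kill an app.
func currentMemoryFootprint() -> UInt64? {
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(
        MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<natural_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), $0, &count)
        }
    }
    return kr == KERN_SUCCESS ? info.phys_footprint : nil
}

/// Starts listening for kernel memory pressure events. Keep the returned
/// source alive for the life of the app so the “cliff” shows up in your logs.
func startMemoryPressureMonitoring() -> DispatchSourceMemoryPressure {
    let source = DispatchSource.makeMemoryPressureSource(
        eventMask: [.warning, .critical], queue: .main)
    source.setEventHandler {
        let level = source.data.contains(.critical) ? "critical" : "warning"
        print("memory_pressure level=\(level) footprint_bytes=\(currentMemoryFootprint() ?? 0)")
    }
    source.resume()
    return source
}
```

Sampling `currentMemoryFootprint()` on a timer and logging it per session is enough to surface the steadily climbing curve that separates a leak from normal usage.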

This is another pattern we’ve worked hard to surface at bitdrift by capturing memory footprint metrics throughout each session, along with app lifecycle context. We did a deep dive on how we handle this [here](https://blog.bitdrift.io/post/memory-leaks), but the short of it is: *we can show you a memory usage graph for every captured session*, including whether the app was in the foreground or background at each point.

## Network behavior

Mobile networks are unpredictable. A request that performs perfectly in the office on Wi-Fi may behave very differently on a congested cellular network.

Many teams track network latency, but a single latency metric rarely tells the full story. To understand how network conditions affect user experience, teams should monitor signals such as:

- request latency distribution (P50, P95, P99)
- network type (Wi-Fi vs cellular)
- request retries and failures
- bandwidth consumption per session

Bandwidth usage is particularly important on mobile. Some apps unknowingly burn through a user’s data plan due to aggressive telemetry, large payloads, or chatty third-party SDKs.

Another common blind spot is treating a network request as a single latency number. In reality, it includes multiple phases: DNS lookup, TCP/TLS handshake, request transmission, and response time. Without visibility into those steps, it’s difficult to determine whether a slowdown is caused by the backend, the network, or the device itself.
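
On iOS, `URLSessionTaskMetrics` exposes these phases directly. Here’s a minimal delegate sketch, assuming Swift (the key=value log format is my own, not a standard schema):

```swift
import Foundation

final class NetworkPhaseLogger: NSObject, URLSessionTaskDelegate {
    func urlSession(_ session: URLSession,
                    task: URLSessionTask,
                    didFinishCollecting metrics: URLSessionTaskMetrics) {
        // Milliseconds between two optional timestamps; 0 if the phase was
        // skipped entirely (e.g. no DNS lookup on a reused connection).
        func ms(_ start: Date?, _ end: Date?) -> Double {
            guard let start, let end else { return 0 }
            return end.timeIntervalSince(start) * 1000
        }
        // One task can span several transactions (redirects, retries), so log each.
        for t in metrics.transactionMetrics {
            print("""
            url=\(t.request.url?.absoluteString ?? "-") \
            dns_ms=\(ms(t.domainLookupStartDate, t.domainLookupEndDate)) \
            connect_ms=\(ms(t.connectStartDate, t.connectEndDate)) \
            tls_ms=\(ms(t.secureConnectionStartDate, t.secureConnectionEndDate)) \
            ttfb_ms=\(ms(t.requestStartDate, t.responseStartDate)) \
            response_ms=\(ms(t.responseStartDate, t.responseEndDate)) \
            rx_bytes=\(t.countOfResponseBodyBytesReceived)
            """)
        }
    }
}
```

Attach it when you create the session, e.g. `URLSession(configuration: .default, delegate: NetworkPhaseLogger(), delegateQueue: nil)`, and every completed task reports its phase breakdown.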

When we built bitdrift, we wanted to expose this deeper level of network visibility. Most tools tell you if an API call failed; bitdrift tells you how much that API call cost the device by looking at:

- **Bytes Per Minute:** We monitor network consumption with high granularity, which lets you identify “bandwidth hogs.” In practice, these are often third-party SDKs or even other observability tools that are over-collecting data and burning through the user’s data plan.
- **Network Visibility:** It’s excellent that some teams track client-side network latency. But treating a network request as a single latency metric leads to, you guessed it, massive blind spots on mobile. We designed bitdrift to break a socket’s life down into its actual components: fetch initialization, DNS resolution, TCP/TLS handshakes, and response latency, each tracked in high resolution. By capturing and plotting these in a waterfall chart, we let you definitively distinguish between an environmental network constraint and a backend performance issue.

## Thermal state tracking

Mobile devices dynamically adjust CPU performance based on temperature. When a phone starts running hot, the operating system throttles CPU performance to protect the hardware.

This means an app that performs perfectly under normal conditions can suddenly feel slow if the device is under thermal pressure. Mobile teams should monitor signals like device thermal state and correlate them with performance regressions to understand whether slowdowns are caused by software or by environmental constraints.
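
On iOS, this signal is cheap to collect: `ProcessInfo` exposes a four-level thermal state plus a notification when it changes. A minimal sketch, assuming Swift (the log format is mine):

```swift
import Foundation

final class ThermalStateLogger {
    private var observer: NSObjectProtocol?

    func start() {
        logCurrentState() // record the baseline at session start
        observer = NotificationCenter.default.addObserver(
            forName: ProcessInfo.thermalStateDidChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in self?.logCurrentState() }
    }

    private func logCurrentState() {
        let label: String
        switch ProcessInfo.processInfo.thermalState {
        case .nominal:  label = "nominal"   // no throttling
        case .fair:     label = "fair"      // mild corrective action
        case .serious:  label = "serious"   // CPU/GPU throttling likely
        case .critical: label = "critical"  // heavy throttling
        @unknown default: label = "unknown"
        }
        // Emit this alongside performance logs so regressions can be
        // correlated with device temperature after the fact.
        print("thermal_state=\(label)")
    }
}
```

Call `start()` once at launch so every session carries its thermal context.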

*A phone that is running hot will throttle the CPU, making your perfectly optimized code run like a turtle.*

That’s why bitdrift logs the thermal state of the device. If you see a cluster of performance issues, you can quickly check if they are correlated with high device temperatures, helping you distinguish between a software bug and environmental hardware constraints.

## Summary

Device performance signals are a critical piece of the mobile observability puzzle. In [Part 1](https://blog.bitdrift.io/post/mobile-logging-ux-signals) of this series, we explored the user experience signals that reveal how your app feels to users. In this post, we looked at the device-level constraints (memory pressure, network variability, and thermal throttling) that often explain why performance degrades in the real world.

In the final post of this series, we’ll examine application behavior and contextual UX signals, including lifecycle transitions, feature flag exposure, and session replay, which provide the missing context needed to reproduce and resolve the hardest mobile bugs.

---

## Frequently asked questions

### What is memory pressure in mobile apps?

Memory pressure occurs when an app consumes too much memory relative to what the device can support. This can lead to slowdowns, background kills, or out-of-memory (OOM) terminations. Monitoring memory usage over time helps identify leaks and prevent crashes that are otherwise difficult to diagnose.

### Why do mobile apps sometimes crash without clear error logs?

Not all crashes are logged as traditional exceptions. In many cases, the operating system terminates the app due to resource constraints like memory pressure. These OS-level kills often leave little to no diagnostic information unless teams are proactively tracking device performance signals.

### How does network variability impact mobile performance?

Mobile networks are inherently unstable and can vary based on location, congestion, and connection type (Wi-Fi vs cellular). This variability affects request latency, retries, and failures. Without logging detailed network behavior (including DNS, connection setup, and response time), it’s difficult to pinpoint the root cause of performance issues.

### What is thermal throttling and why does it matter?

Thermal throttling occurs when a device reduces CPU performance to prevent overheating. This can make an app feel slow even if the code is well-optimized. Tracking device thermal state helps teams distinguish between performance issues caused by software and those caused by environmental conditions.
