Collin

User experience signals to log for mobile observability

Mobile apps fail in ways traditional observability tools can't detect. Learn what UX signals — frame drops, workflow latency, and silent timeouts — your team should be logging to get ahead of the friction that crashes dashboards miss.

The green dashboard fallacy

Every mobile engineer has lived some version of this scenario: backend health is green and your crash-free rate sits at 99.9%, yet App Store reviews are flooding in about the app being "slow" or about a broken experience users encountered. Here's the problem: mobile apps fail in ways traditional observability tools can't detect and dashboards can't see. Teams log for the disaster (the crash) but are blind to the friction (the jank). The typical approach is reactive, fragmented, and heavily sampled.

To build a high-performance mobile app in 2026, you need to log the signals that explain how the app behaves on a real device. Your strategy should shine a light on the mobile-specific blind spots that servers simply don't have. In this three-part series, we'll unpack the most important signals mobile teams should be capturing, starting with user experience (UX) signals.

User experience (UX) signals

The first blind spot in mobile observability is that users experience apps as interactions, not metrics. In other words: users don't care about your uptime; they care about how the app feels. Here are some specific metrics to help you understand user experience.

Frame drops and UI jank

Mobile users immediately notice frame drops. Even small stutters can make an app feel unreliable. This is one of the biggest blind spots on Android: micro-stutters and freezes. Aside from being annoying for end users, Google Play penalizes an app's Play Store ranking if it exceeds Google's thresholds for slow or frozen frames, and your app might even get a warning label on its Google Play store page. Scary, right? To make it worse, you might know your app feels laggy, or you might see a generic slow-rendering percentage in Google Play Console, but how do you actually fix that? Most tools only show aggregate rendering metrics. They don't tell you:
  • which screen dropped frames
  • what the UI state was
  • what the device was doing at the time
The good news is that libraries like Android JankStats do expose this information at the frame level. When you log these events with context (screen name, device state, network state), you can pinpoint exactly why rendering fell below 60fps. That's why we've integrated bitdrift directly with the Android JankStats library. This allows us to track every single frame drop and, crucially, attach UI state to it. By capturing the specific UI state at the exact millisecond a frame exceeds its 16ms budget, you can identify and fix the issues that actually impact your Google Play Store ranking. It's the difference between guessing where a bottleneck lives and having a frame-by-frame receipt of why your app isn't hitting its 60fps target.
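The frame-budget math behind those 16ms and 60fps figures is simple enough to sketch in plain Kotlin. The types below are simplified stand-ins for what a frame-metrics library like JankStats reports per frame, not the real androidx API:

```kotlin
// Simplified, illustrative model of a per-frame report; the FrameReport
// type and isJank helper are hypothetical, not the androidx JankStats API.
data class FrameReport(
    val screen: String,      // UI state attached when the frame rendered
    val durationNanos: Long, // how long the frame took to render
)

// At 60fps each frame has a ~16.67ms budget; anything over it is jank.
const val FRAME_BUDGET_NANOS_60FPS: Long = 16_666_667L

fun isJank(frame: FrameReport, budgetNanos: Long = FRAME_BUDGET_NANOS_60FPS): Boolean =
    frame.durationNanos > budgetNanos

fun main() {
    val frames = listOf(
        FrameReport("product_detail_view", 8_000_000L),  // 8ms: within budget
        FrameReport("product_detail_view", 32_000_000L), // 32ms: missed ~2 frames
    )
    // Only janky frames would be logged, carrying the screen name as context
    frames.filter { isJank(it) }
        .forEach { println("jank on ${it.screen}: ${it.durationNanos / 1_000_000}ms") }
}
```

Logging only frames that blow the budget, with the screen name attached, is what turns an aggregate "slow rendering" percentage into something you can actually act on.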

"Intent-to-action" workflows

Mobile users don't experience apps as isolated functions; they experience workflows. A user taps "Add to Cart," submits a login form, or starts checkout and expects something to happen immediately. When those workflows feel slow or fail silently, users notice. Yet many observability tools focus on backend latency or crashes, leaving the client-side gap between user intent and visible result largely unmeasured.

Mobile teams should instrument the start and end of key user actions to track workflow latency, completion rate, and timeouts. Measuring the time between events like "Add to Cart tapped" and "Cart updated" reveals whether real user interactions are getting faster or slower across releases. Capturing outcomes (success, retry, failure) and basic context like device model, network type, and UI state helps engineers understand why a workflow is degrading. These signals often surface issues such as network stalls, UI deadlocks, or backend slowdowns that never show up in crash dashboards but still damage the user experience.

This is another area we looked at closely as we built our mobile observability solution. With bitdrift, measuring the time between any two logs is trivial: you don't need to wrap your code in complex stopwatches or spans. If you have a log when a user taps "Add to Cart" and another when a "Success" message appears, you can create a dynamic span in the bitdrift dashboard using workflows to measure that duration across your entire fleet. The best part is that these dynamic spans don't require a new app store release, and you can tweak them to broaden the start or end trigger. You have full control. Here's a code sample of how you can add a custom log like "Add to Cart" to your code using bitdrift:
```kotlin
import android.os.SystemClock
import io.bitdrift.capture.Capture
import java.util.UUID

fun onAddToCartClicked(productId: String, price: Double) {
    val opId = UUID.randomUUID().toString()
    val startedAtMs = SystemClock.elapsedRealtime()

    // 1) Start marker: user intent + context
    Capture.Logger.logInfo(
        fields = mapOf(
            "event" to "cart_add_initiated",
            "op_id" to opId,
            "product_id" to productId,
            "price_usd" to price.toString(),
            "ui_state" to "product_detail_view",
            "interaction_type" to "tap",
        ),
    ) { "Cart.AddInitiated" }

    cartRepository.addToCart(productId) { success, errorCode, errorMessage ->
        val durationMs = (SystemClock.elapsedRealtime() - startedAtMs).toString()

        if (success) {
            // 2) Success marker: bitdrift now has a clear intent-to-action delta
            Capture.Logger.logInfo(
                fields = mapOf(
                    "event" to "cart_add_success",
                    "op_id" to opId,
                    "product_id" to productId,
                    "duration_ms" to durationMs,
                ),
            ) { "Cart.AddSuccess" }
        } else {
            // 3) Non-fatal failure marker, captured with error context
            Capture.Logger.logWarning(
                fields = mapOf(
                    "event" to "cart_add_failed",
                    "op_id" to opId,
                    "product_id" to productId,
                    "price_usd" to price.toString(),
                    "duration_ms" to durationMs,
                    // Illustrative classification; derive from the real error in practice
                    "error_type" to "network_timeout",
                    "error_code" to (errorCode ?: "unknown"),
                    "error_message" to (errorMessage ?: "unknown"),
                    "was_retry" to "false",
                ),
            ) { "Cart.AddFailed" }
        }
    }
}
```
And here's what the resulting log would look like:
```json
{
  "message": "Cart.AddFailed",
  "level": "warning",
  "timestamp": "2026-03-03T17:41:06Z",
  "fields": {
    "event": "cart_add_failed",
    "op_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
    "product_id": "sku_9921_x",
    "price_usd": "45.00",
    "duration_ms": "1240",
    "error_type": "network_timeout",
    "error_code": "504",
    "error_message": "Gateway Timeout",
    "was_retry": "false"
  },
  "context": {
    "device_model": "Pixel 8 Pro",
    "os_version": "Android 14",
    "network_type": "cellular",
    "thermal_state": "nominal",
    "memory_usage_mb": "412",
    "session_id": "sess_882194",
    "app_lifecycle": "foreground"
  }
}
```

The "silent" timeout

Workflows aren't just for successes. You can and should also be tracking timeouts. What happens if a user starts a checkout but the order_confirmed log never fires? One of the most damaging mobile experiences is the silent timeout:
  • User taps a button
  • A loading spinner appears
  • Nothing ever finishes
Because these events don't crash the app, they often go completely undetected. Mobile teams should define expected completion windows for critical workflows. If a workflow does not complete within that window, it should be treated as a failure event. This helps identify:
  • Backend stalls
  • Lost network requests
  • UI deadlocks
  • Dropped callbacks
Bonus: this is another area where bitdrift can help. With bitdrift's Timeout Matcher, you can set a workflow to trigger an action, like "record session," if a specific sequence of logs doesn't complete within a set timeframe (e.g., 15 seconds). This will help you catch the "spinning loading wheel of death" cases that don't technically crash but definitely cause churn. Maybe even more important: it also means you get the full user session, allowing you to see exactly what happened to the user during this eternity… errr, 15 seconds.
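The completion-window idea is simple enough to sketch in plain Kotlin. The helper below is hypothetical, a client-side illustration of the logic rather than anything from the bitdrift SDK (the Timeout Matcher applies this server-side via workflows, without shipping client code), but it shows how a missing terminal log becomes a failure event:

```kotlin
// Hypothetical sketch: flag any workflow whose terminal log never arrives
// within the expected completion window, instead of silently dropping it.
data class LogEvent(val name: String, val atMs: Long)

fun detectSilentTimeouts(
    events: List<LogEvent>,
    startEvent: String, // e.g. "checkout_started"
    endEvent: String,   // e.g. "order_confirmed"
    windowMs: Long,     // expected completion window, e.g. 15_000ms
): List<Long> {
    val timeouts = mutableListOf<Long>()
    events.filter { it.name == startEvent }.forEach { start ->
        val finished = events.any {
            it.name == endEvent && it.atMs in start.atMs..(start.atMs + windowMs)
        }
        // No matching end event inside the window: treat as a failure event
        if (!finished) timeouts.add(start.atMs)
    }
    return timeouts
}

fun main() {
    val session = listOf(
        LogEvent("checkout_started", 0L),
        LogEvent("order_confirmed", 4_000L),   // completed in time
        LogEvent("checkout_started", 60_000L), // spinner of death: never confirms
    )
    println(detectSilentTimeouts(session, "checkout_started", "order_confirmed", 15_000L))
}
```

The key design choice is that "nothing happened" is itself the signal: the absence of `order_confirmed` inside the window is what gets logged, so the failure shows up in your data even though no exception was ever thrown.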

Summary

Crash rates don't tell you how your app actually feels to users. Signals like frame drops, workflow latency, and silent timeouts reveal the friction that traditional observability tools often miss. Logging these UX signals is the first step toward understanding what's really happening on user devices. Next in this series, we'll look at device performance signals: the memory, network, and thermal metrics that explain why perfectly good code can suddenly slow down in the real world. Interested in learning more? Check out the sandbox or start a free trial to see what working with Capture is like. You can also get in touch for a demo or join us in Slack to ask questions and share feedback.
