OpenTelemetry with Shiny
Introduction
Understanding how your Shiny application behaves in production is critical for maintaining performance and reliability. While local development gives you some insight, production environments introduce complexities like concurrent users, varied network conditions, and unexpected usage patterns. OpenTelemetry provides a standardized way to collect observability data from your Shiny applications, helping you answer questions like:
- Why is my app slow for certain users?
- Which reactive expressions are taking the most time?
- How long does it take for outputs to render?
- What sequence of events occurs when a user interacts with my app?
Starting with Shiny v1.12, OpenTelemetry support is built directly into the framework, making it easier than ever to gain visibility into your applications at scale.
What is OpenTelemetry?
OpenTelemetry (aka OTel) describes itself as “high-quality, ubiquitous, and portable telemetry to enable effective observability”. It is an open-source observability framework that provides a vendor-neutral way to collect telemetry data from applications. OTel standardizes three types of observability data:
Traces: Show the path of a request through your application. In Shiny, a trace reveals how a user’s input change triggers a cascade of reactive calculations, ultimately updating outputs. Traces help you understand the sequence and timing of operations.
Logs: Detailed event records that capture what happened at specific moments, including errors, warnings, and informational messages.
Metrics: Numerical measurements collected over time, such as request counts, response times, or resource utilization.
These data types were standardized under the OpenTelemetry project, which is supported by a large community and many companies. The goal is to provide a consistent way to collect and export observability data, making it easier to monitor and troubleshoot applications.
The OpenTelemetry ecosystem
OpenTelemetry is vendor-neutral, meaning you can send your telemetry data to various local backends like Jaeger, Zipkin, Prometheus, or cloud-based services like Grafana Cloud, Logfire, and Langfuse. This flexibility means you’re not locked into any particular monitoring solution.
We’ve been using Logfire internally at Posit to help develop OTel integration in many R packages and other applications. Throughout this article, you’ll see examples of OTel traces visualized in Logfire.
The image below shows an example trace in Logfire (left) from a Shiny app (right) that uses Generative AI to provide weather forecasts. The trace captures the entire user session, including reactive updates, model calls, and a tool invocation. We will explore this example in more detail later in the article.

OpenTelemetry in Shiny
You may be familiar with debugging tools like {reactlog} and {profvis} for measuring the performance of your Shiny app during local development. While these tools are invaluable for profiling and visualizing reactive execution, they are not designed for production use: they are meant to be used locally, since keeping them enabled in production would effectively be a memory leak. Beyond performance metrics, OTel gives you invaluable insight into how and where actual users are using your application, providing structured data for A/B testing or for diagnosing issues that arise in real-world usage.
In addition, neither {reactlog} nor {profvis} can capture what happens in processes outside the main R process. For example, if your Shiny app uses {mirai} to run code asynchronously in background R processes, neither tool can trace what happens in those background processes. With OpenTelemetry, you can trace across process boundaries and programming languages, capturing the full picture of your app's behavior.
OpenTelemetry records this information at scale with minimal overhead, helping you answer previously impossible questions about your production environment. It gives you visibility into your app's performance and behavior so you can identify bottlenecks, debug issues, and optimize the user experience, which is especially crucial once your app is deployed in production with real-world usage.
{shinyloadtest}
{shinyloadtest} is built to replay a set of headless-user actions to simulate stress on your server. While {shinyloadtest} may provide more timing detail out of the box for load testing, OpenTelemetry is designed for continuous observability in production environments: it captures real user sessions, traces cross-process interactions, and integrates with a wide range of monitoring backends.
Learn more about {shinyloadtest} in the Shiny Load Testing article.
Adding OpenTelemetry integration
OTel support is automatically enabled in Shiny once {otel} is able to record traces and logs.
To get started, install the latest versions of Shiny, {otel}, and {otelsdk}:
pak::pak(c("shiny", "otel", "otelsdk"))

To enable OpenTelemetry tracing, you need to set a few system environment variables that describe where your recordings are sent. In the example below, we set them in an .Renviron file to point to Logfire.
.Renviron
# Enable OpenTelemetry by setting Collector environment variables
OTEL_TRACES_EXPORTER=http
OTEL_LOGS_EXPORTER=http
OTEL_LOG_LEVEL=debug
OTEL_METRICS_EXPORTER=http
OTEL_EXPORTER_OTLP_ENDPOINT="https://logfire-us.pydantic.dev"
OTEL_EXPORTER_OTLP_HEADERS="Authorization=<your-write-token>"
You can edit your app-specific environment variables by calling usethis::edit_r_environ(scope="project") from within your Shiny app project directory.
You’ll know your setup is enabled if otel::is_tracing_enabled() returns TRUE.
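For example, a quick check from the R session that will run your app (using the {otel} helper mentioned above):

# Should return TRUE once the exporter environment variables are set
# and {otelsdk} is installed
otel::is_tracing_enabled()
#> [1] TRUE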
OpenTelemetry in action
Below is an example {shinychat} app with an {ellmer} tool that fetches real-time weather forecasts (via {weathR}, which uses {httr2}) for a given latitude and longitude. This simple (yet non-trivial) app helps us showcase what sort of information {shiny}, {ellmer}, and {httr2} can surface via OTel.
Gaining timing insights into applications that leverage Generative AI (GenAI) is critical to improving user experience. Without OpenTelemetry, if a user reported that the app was slow, we would not be able to accurately determine whether the slowness came from the AI model request time, model streaming time, tool execution time, or even follow-up reactive calculations in Shiny.
app.R
library(shiny)

# Create tool that grabs the weather forecast (free) for a given lat/lon
# Inspired from: https://posit-dev.github.io/shinychat/r/articles/tool-ui.html
get_weather_forecast <- ellmer::tool(
  function(lat, lon) {
    weathR::point_tomorrow(lat, lon, short = FALSE)
  },
  name = "get_weather_forecast",
  description = "Get the weather forecast for a location.",
  arguments = list(
    lat = ellmer::type_number("Latitude"),
    lon = ellmer::type_number("Longitude")
  )
)

ui <- bslib::page_fillable(
  shinychat::chat_mod_ui("chat", height = "100%")
)

server <- function(input, output, session) {
  # Set up client within `server` to not _share_ the client for all sessions
  client <- ellmer::chat_claude("Be terse.")
  client$register_tool(get_weather_forecast)

  chat_server <- shinychat::chat_mod_server("chat", client, session)

  # Set chat placeholder on app init
  observe({
    chat_server$update_user_input("What is the weather in Atlanta, GA?")
  }, label = "set-default-input")
}

shinyApp(ui, server)

You'll notice that the app.R has no OpenTelemetry-specific code. Only the system environment variables are needed to enable OTel support in Shiny and other packages that support OTel (like {ellmer} and {httr2}).
While not necessary for your app to function, it is recommended to provide labels to your observers and reactives to make them easier to identify in traces.
By default, Shiny generates labels based on the variable the object is assigned to. For example, my_obs <- observe({...}) will have a label of "my_obs". The same effect can be achieved with observe({...}, label = "my_obs").
In the example above, we explicitly set the label of the observe() that sets the default user input to "set-default-input".

When you run the app and interact with it, OpenTelemetry traces are automatically recorded and sent to your configured backend (Logfire in this case). Here’s an example trace from Logfire showing a user session interacting with the chat app and the weather tool:

The traces above recorded a single user session where the user asked for the weather in Atlanta, GA and then closed the app. The trace shows:
- The Shiny session lifecycle, including session_start and session_end
- Many {shinychat} chat module spans for handling user input and messages
- Reactive updates triggered by changes in the session's input
- An ExtendedTask span for the computation of the AI agent response
- 2x chat claude spans representing calls to the AI agent model
- A single get_weather_forecast tool call being executed, including the HTTP requests made by {httr2} to fetch the weather data
Notice how the spans are nested, showing the relationship between user actions, required reactive calculations, and external API calls. This level of detail helps you understand exactly how your app is performing in production and where any bottlenecks or issues may arise.
What can Shiny record?
Shiny automatically creates OpenTelemetry spans for:
- Session lifecycle: When sessions start and end, including HTTP request details
- Reactive updates: The entire cascade of reactive calculations triggered by an input change or a new output to be rendered
- Reactive expressions: Individual calculations such as reactive(), observe(), output, and other reactive constructs
Additionally, Shiny adds logs for events such as:
- Fatal or unhandled errors (with optional error message sanitization)
- When a reactiveVal() or reactiveValues() value is set
Every span and log entry provided by Shiny includes the session ID (session.id) attribute, making it easy to filter and analyze data for specific user sessions.
Currently, no metrics (numerical measurements over time) are recorded by Shiny. However, {plumber2} (via {reqres} v1.1.0) has added OTel metrics: a counter of the number of active requests and histograms for request durations and request/response body sizes.
For more detailed information on configuring and using OpenTelemetry within R, check out the {otel} package documentation and how to set up record collection with {otelsdk}.
Fine-grained control
Automatic tracing is perfect to get started, but you may want more control over what gets traced. Shiny gives you that flexibility through the shiny.otel.collect option (which falls back to the SHINY_OTEL_COLLECT environment variable). You can set this option to control the level of tracing detail with the following values:
"none"- No Shiny OpenTelemetry tracing"session"- Track session start and end"reactive_update"- Track reactive updates (includes"session"tracing)"reactivity"- Trace all reactive expressions (includes"reactive_update"tracing)"all"[Default] - Everything (currently equivalent to “reactivity”)
With "all" being the default level of tracing, you may want to reduce the amount of Shiny spans/logs collected for large applications or production environments. Reducing the amount of Shiny spans/logs collected can help decrease the volume of telemetry data being sent to your backend, reducing costs.
For example, we can set the shiny.otel.collect option to "session" to only trace session start and session end events:
# Only trace session lifecycle, not every reactive calculation
options(shiny.otel.collect = "session")If you are going to add your own spans or logs using {otel}, you may want to reduce the amount of Shiny spans collected to "reactive_update". This will create the a span for every reactive update in addition to the session start/end spans. This level of tracing provides a high-level overview of user interactions without the noise of every individual reactive expression. Your custom spans/logs can then fill in the gaps for specific operations you care about.
Finally, a common use case is to remove spans/logs for specific parts of your app. For example, you may want to avoid tracing certain modules or observers that are not critical to your analysis. You can achieve this by temporarily setting the shiny.otel.collect option within a specific scope using shiny::withOtelCollect() or shiny::localOtelCollect(). The collect level must be set when creating the Shiny constructs (like modules, observe(), reactive(), etc.) to avoid tracing during the execution of the reactive expressions.
Recall the server function from the previous chat app example:
server <- function(input, output, session) {
  client <- ellmer::chat_claude("Be terse.")
  client$register_tool(get_weather_forecast)

  chat_server <- shinychat::chat_mod_server("chat", client, session)

  observe({
    chat_server$update_user_input("What is the weather in Atlanta, GA?")
  }, label = "set-default-input")
}

Instead, you could choose to not trace {shinychat}'s server module or the observer that sets the default user input:
server <- function(input, output, session) {
  client <- ellmer::chat_claude("Be terse.")
  client$register_tool(get_weather_forecast)

  # Do not collect any Shiny OTel spans/logs
  # for anything created within this block
  withOtelCollect("none", {
    chat_server <- shinychat::chat_mod_server("chat", client, session)

    observe({
      chat_server$update_user_input("What is the weather in Atlanta, GA?")
    }, label = "set-default-input")
  })
}

This removes the many spans created by {shinychat} and the observer, making it easier to focus on the parts of the app you care about.
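shiny::localOtelCollect() gives the same effect without a wrapping block. A minimal sketch, assuming it follows the usual local_*() convention of applying for the remainder of the enclosing function (the module wrapper here is hypothetical):

quiet_chat_mod_server <- function(id, client) {
  moduleServer(id, function(input, output, session) {
    # Assumed behavior: suppresses Shiny OTel spans/logs for the rest of
    # this module server function (withr-style "local" semantics)
    localOtelCollect("none")

    chat_server <- shinychat::chat_mod_server("chat", client, session)

    observe({
      chat_server$update_user_input("What is the weather in Atlanta, GA?")
    }, label = "set-default-input")
  })
}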

Interpreting the traces
When looking at the trace for timing, you can see how long a model request took in the chat claude spans.

The gap between this span's duration and its parent's duration is how long the results took to stream back to the user or for the model to decide its next action. For the overall user experience, the total time from input to output is represented by the ExtendedTask span, roughly 8 seconds in this case. Only half a second was spent in the tool call (something we as app authors could possibly optimize); the remaining 7.5 seconds were spent generating and streaming the model response.
More generally, as users interact with your app, Shiny generates traces that generally look like this:
(log) Set reactiveValues: input$<value_name>
reactive_update
└── output: <output_name>
└── reactive: <reactive_name>
└── reactive: <inner_reactive_name>
This example implicitly shows the relationship between input$<value_name> changing and <output_name> being re-rendered as a result of that change.
It explicitly shows the chain of reactive expressions that needed to be computed to produce the new value for <output_name>, including any nested reactives.
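As a concrete (hypothetical) sketch, a server shaped like the following would produce a trace with that shape, with value_name, reactive_name, inner_reactive_name, and output_name standing in for the placeholders above:

server <- function(input, output, session) {
  # reactive: <inner_reactive_name>
  inner_reactive_name <- reactive({
    input$value_name * 2
  })

  # reactive: <reactive_name>
  reactive_name <- reactive({
    inner_reactive_name() + 1
  })

  # output: <output_name>
  output$output_name <- renderText({
    reactive_name()
  })
}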
Understanding reactive dependencies
Traces reveal the dependency chain in your reactive graph. When an input changes, you can see:
- Which reactive expressions were invalidated
- The order in which they re-executed
- How long each computation took
- Which outputs were ultimately updated
This visibility is invaluable for identifying performance bottlenecks and understanding unexpected reactive behavior.
What gets traced?
Shiny’s OpenTelemetry integration automatically creates spans for key operations within your application.
The spans created depend on the collect level, as described in the subsections below.
Session lifecycle
Shiny traces the session lifecycle when the collect level is "session" or higher.
Every Shiny session generates a span marking when each session starts and ends: session_start and session_end. The session_start span will capture the execution of your app’s server function, including any initial reactive_update spans (if enabled) that run when the session begins.
Spans
- session_start: When starting a new session and running the server function
- session_end: When ending the session
Reactive updates
Shiny traces reactive updates when the collect level is "reactive_update" or higher.
When an input changes or an output needs to be re-rendered, Shiny creates a reactive_update span that encompasses the entire cascade of reactive calculations triggered by that change.
The reactive_update span starts when Shiny begins an output render or observe() calculation and ends when all reactive expressions have resolved (including promise objects). Without async, this is equivalent to the time the main R process is busy (something to be minimized). With async, it measures the window during which Shiny knows calculations are pending or being computed. When trying to minimize user wait time, focus on reducing the duration of spans that fall outside ExtendedTask or {mirai} operations.
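For example, a minimal sketch (assuming a numeric input n and an actionButton go in the UI, and that mirai::daemons() has been set up as in the {mirai} example later in this article) that moves slow work into an ExtendedTask so the reactive_update spans stay short while the computation runs in the background:

server <- function(input, output, session) {
  # The slow work runs in a background process, so the reactive_update
  # spans cover only scheduling and rendering, not the sleep itself
  slow_summary <- ExtendedTask$new(function(n) {
    mirai::mirai(
      {
        Sys.sleep(2)
        mean(runif(n))
      },
      n = n
    )
  })

  observe({
    slow_summary$invoke(input$n)
  }) |> bindEvent(input$go)

  output$result <- renderText({
    slow_summary$result()
  })
}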
Spans
- reactive_update: The entire cascade of reactive calculations triggered by an input change or a new output to be rendered
Reactive expressions
Shiny traces the execution of every reactive calculation when the collect level is "reactivity" or higher.
All "reactivity" spans have an optional module id prefix <mod_id>: if the reactive object is defined within a module. This prefix will automatically appear. Having the module id within the span name helps disambiguate reactives with the same name across different modules.
The span names for reactive, observe, and output expressions may include qualifiers such as cache (for cached reactives) or event (for event-driven reactives).
A fully decorated reactive() span name would look like reactive cache event myModuleId:myReactive: a reactive defined within a module with ID myModuleId that is both cached and event-driven via myReactive <- reactive({...}) |> bindCache(...) |> bindEvent(...).
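A hedged sketch of such a reactive, assuming it is created inside a module server with ID myModuleId and that the UI provides inputs n and go:

# Span name would be "reactive cache event myModuleId:myReactive"
myReactive <- reactive({
  summary(faithful$eruptions[seq_len(input$n)])
}) |>
  bindCache(input$n) |>
  bindEvent(input$go)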
Spans
- reactive [cache ][event ][<mod_id>:]<name>: reactive() calculations. Calculate a new value only when their upstream values change.
- output [cache ][event ][<mod_id>:]<name>: All render functions (renderText(), renderPlot(), etc.) that produce output for the UI.
- observe [cache ][event ][<mod_id>:]<name>: observe() executions. Perform side effects in response to reactive changes.
- debounce [<mod_id>:]<name>: Debounced value being updated
- throttle [<mod_id>:]<name>: Throttled value being updated
- reactivePoll [<mod_id>:]<name>: reactivePoll() value computation
- reactiveFileReader [<mod_id>:]<name>: reactiveFileReader() value computation given file changes
- ExtendedTask [<mod_id>:]<name>: Long-running background computations initiated via ExtendedTask()
Labels
Each OTel span derives its name from the label of the corresponding reactive object. It is highly recommended to provide meaningful labels for your reactive expressions and observers to make traces easier to interpret.
By default, labels are generated based on the variable name they are assigned to.
# ❌ - label: `"x"`
x <- reactive({ ... })

# ❌ - label: `"args"`
args <- reactiveValues(...)

# ❌ - label: `"<anonymous>"`
observe({ ... })

# ❌ - label: `"<anonymous>"`
ExtendedTask$new(...)

However, for better trace readability, you should explicitly set labels when possible:
# ✅ - label: `"chat_last_message"`
chat_last_message <- reactive({ ... })
x <- reactive({ ... }, label = "chat_last_message")

# ✅ - label: `"chat_statistics"`
chat_statistics <- reactiveValues(...)

# ✅ - label: `"update_user_input"`
update_user_input <- observe({ ... })
observe({ ... }, label = "update_user_input")

# ✅ - label: `"weather_forecast_task"`
weather_forecast_task <- ExtendedTask$new(...)

Logged events
In addition to OTel spans, Shiny logs important events:
- Fatal errors: Fatal error (fatal level)
- Unhandled errors: Unhandled error (error level)
- reactiveVal() and reactiveValues() values being set: Set reactiveVal [<mod_id>:]<name> and Set reactiveValues [<mod_id>:]<name>$key (info level)
- An ExtendedTask object calling $invoke() and the task being added to its queue: ExtendedTask [<mod_id>:]<name> add to queue (debug level)
The reactiveVal(), reactiveValues(), and ExtendedTask logs all require the Shiny OTel collect level to be set to "reactivity" or higher.
All spans and log entries include the session ID, making it easy to filter and analyze data for specific user sessions.
Production considerations
When deploying Shiny apps with OpenTelemetry to production, consider these best practices:
Sanitize sensitive data
By default, Shiny sanitizes OTel fatal and unhandled error messages. If you want your error messages to be unsanitized, disable error sanitization:
options(shiny.sanitize.errors = FALSE)

This option only pertains to unhandled or fatal Shiny error messages. Altering this option will not change any other text that may be logged or traced by your application code.
For more details on error sanitization, see the article on Sanitizing error messages.
Correlate with other metrics
OpenTelemetry traces work best when combined with other observability data:
- Server metrics (CPU, memory, network)
- Application logs
- Custom business metrics
This holistic view helps you understand not just what is slow, but why.
Advanced usage
Custom spans
While Shiny automatically instruments the reactive graph, you can add custom spans for application-specific operations. They will appear as children of Shiny’s automatic spans in your trace view.
expensive_calculation <- reactive({
  # Start a custom span for this calculation
  otel::start_local_active_span("my custom span")

  # Simulate an expensive operation
  Sys.sleep(2)
  input$n ^ 2
})
To avoid tracing the intermediate reactive expressions (such as reactive expensive_calculation), you can use withOtelCollect("none", {...}) to disable Shiny OTel spans/logs within that block:
app.R
library(shiny)
library(otel)

ui <- bslib::page_fillable(
  title = "My App",
  sliderInput("n", "N", 0, 100, 20),
  verbatimTextOutput("n_squared")
)

server <- function(input, output, session) {
  # Do not trace _Shiny_ reactive expressions within this block
  # All other otel spans/logs will still be recorded
  withOtelCollect("none", {
    expensive_calculation <- reactive({
      # Start a custom span for this calculation
      otel::start_local_active_span("my custom span")

      # Simulate an expensive operation
      Sys.sleep(2)
      input$n ^ 2
    })
  })

  output$n_squared <- renderText({
    n_squared <- expensive_calculation()
    paste0("n ^ 2 = ", n_squared)
  })
}

shinyApp(ui, server)

Now your custom span will appear in the trace without the intermediate Shiny reactive spans cluttering the view.

Async tracing with OpenTelemetry
OpenTelemetry is designed for production observability. It excels at:
- Capturing cross-process and cross-language traces (e.g., async R processes, external API calls)
- Recording only the desired spans and logs (not every reactive execution or full function call stack)
- Exporting to an external backend for long-term storage and analysis, rather than accumulating data in the R process's memory
Let's adjust the chat example to use {mirai} (an R computation manager) for an async tool call, so we can see how OpenTelemetry captures spans across process boundaries as {mirai} runs the tool in a background R process:
app.R
library(shiny)

# Set up mirai daemons to handle background processing
# This allows us to run long-running tasks without blocking the Shiny app
mirai::daemons(2)

# Same tool as before, but `get_weather_forecast` now
# uses `{mirai}` to calculate in background
get_weather_forecast <- ellmer::tool(
  function(lat, lon) {
    # Compute weather forecast within a background R process
    mirai::mirai(
      {
        weathR::point_tomorrow(lat, lon, short = FALSE)
      },
      lat = lat,
      lon = lon
    )
  },
  name = "get_weather_forecast",
  description = "Get the weather forecast for a location.",
  arguments = list(
    lat = ellmer::type_number("Latitude"),
    lon = ellmer::type_number("Longitude")
  )
)

# Unaltered `ui` and `server`
ui <- bslib::page_fillable(
  shinychat::chat_mod_ui("chat", height = "100%")
)

server <- function(input, output, session) {
  client <- ellmer::chat_claude("Be terse.")
  client$register_tool(get_weather_forecast)

  withOtelCollect("none", {
    chat_server <- shinychat::chat_mod_server("chat", client, session)

    observe({
      chat_server$update_user_input("What is the weather in Atlanta, GA?")
    }, label = "set-default-input")
  })
}

shinyApp(ui, server)

In the resulting OpenTelemetry trace, you'll see spans for the Shiny session, reactive updates, the extended task for the AI response, and, importantly, spans for the async tool execution in the background R process via {mirai}. This cross-process tracing is something neither {reactlog} nor {profvis} can provide.

For more details on using {promises} with {otel}, check out the OpenTelemetry reference in the {promises} package.
Attributes
Existing attributes
Shiny automatically adds a session.id attribute to every span or log recorded. This session.id corresponds to the session$token value for the Shiny session, allowing you to filter and analyze all telemetry data for a specific user session.
In addition to session.id, Shiny also adds code attributes to reactive expression spans. With the combination of the code.filepath, code.lineno, and code.column attributes, OpenTelemetry viewers can provide a file path you can copy into your IDE to jump directly to your code's origin.

Adding attributes
It is possible to enhance existing spans with custom attributes:
library(otel)

server <- function(input, output, session) {
  observe({
    # Add custom attributes to the current active span
    # only if tracing is enabled
    if (otel::is_tracing_enabled()) {
      ospan <- otel::get_active_span()
      ospan$set_attribute("user_role", "admin")
      ospan$set_attribute("app_version", "1.2.3")
    }

    # Your observer logic...
  })

  # Your app logic...
}

However, for more control, it is recommended to create your own custom spans as you can control exactly which spans get the attributes and when the span starts/ends:
library(otel)

server <- function(input, output, session) {
  # Disable only the Shiny OTel spans/logs within this block
  withOtelCollect("none", {
    observe({
      # Add custom attributes to a custom otel span
      otel::start_local_active_span(
        "custom observer span",
        attributes = otel::as_attributes(list(
          user_role = "admin",
          app_version = "1.2.3"
        ))
      )

      # Your observer logic...
    })
  })

  # Your app logic...
}

Analyzing traces
While viewing traces in your observability platform’s UI is helpful for real-time debugging, you may want to download and analyze trace data programmatically for deeper analysis, reporting, or custom visualizations. The Logfire API provides a SQL-like query interface for retrieving traces and spans.
Querying traces with httr2
In the example below, we use {httr2} to query the Logfire API and retrieve traces for analysis. Here’s how to download spans for a specific session:
The LOGFIRE_API_READ_TOKEN environment variable should contain your Logfire read token. A Logfire API read token is different from the write token used for exporting traces.
You can generate read tokens in your Logfire project settings.
library(httr2)
library(dplyr)

# Logfire API read token
logfire_read_token <- Sys.getenv("LOGFIRE_API_READ_TOKEN")

# Query for all spans from a specific session
session_id <- "8dec1d69f1f456c123c5ac10abb83d63"
query <- sprintf(
  "SELECT * FROM RECORDS WHERE attributes->>'session.id' = '%s'",
  session_id
)

# Make the API request
response <-
  httr2::request("https://logfire-us.pydantic.dev/v1/query") |>
  httr2::req_method("GET") |>
  httr2::req_headers("Accept" = "text/csv") |>
  httr2::req_auth_bearer_token(logfire_read_token) |>
  httr2::req_url_query(sql = query) |>
  httr2::req_perform() |>
  httr2::resp_body_raw()

session_dt <-
  response |>
  readr::read_csv() |>
  select(
    span_id,
    trace_id,
    kind,
    level,
    parent_span_id,
    span_name,
    start_timestamp,
    end_timestamp,
    duration # seconds
  )

session_dt
#> # A tibble: 5 × 9
#> span_id trace_id kind level parent_span_id span_name start_timestamp end_timestamp duration
#> <chr> <chr> <chr> <dbl> <chr> <chr> <dttm> <dttm> <dbl>
#> 1 55c9efdd56810779 21f70e81038967… span 9 NA reactive… 2025-12-04 17:00:53 2025-12-04 17:00:53 0.0210
#> 2 84f7f7a9ac9cc4de 21f70e81038967… span 9 55c9efdd56810… Extended… 2025-12-04 17:00:53 2025-12-04 17:01:07 13.8
#> 3 8b9246e74bc00329 21f70e81038967… span 9 84f7f7a9ac9cc… reactive… 2025-12-04 17:01:07 2025-12-04 17:01:07 0.00151
#> 4 0b3566c1efc11ee8 573a9fe611b582… span 9 NA session_… 2025-12-04 17:00:50 2025-12-04 17:00:50 0.0228
#> 5 281611d255f995cb d4379535673f86… span 9 NA session_… 2025-12-04 17:01:12 2025-12-04 17:01:12 0.00256
# Find _long_ spans (> 1 second)
session_dt |>
  filter(duration > 1) |>
  tibble::glimpse()
#> Rows: 1
#> Columns: 9
#> $ span_id <chr> "84f7f7a9ac9cc4de"
#> $ trace_id <chr> "21f70e81038967c027b70dce2afdee1e"
#> $ kind <chr> "span"
#> $ level <dbl> 9
#> $ parent_span_id <chr> "55c9efdd56810779"
#> $ span_name <chr> "ExtendedTask <anonymous>"
#> $ start_timestamp <dttm> 2025-12-04 17:00:53
#> $ end_timestamp <dttm> 2025-12-04 17:01:07
#> $ duration        <dbl> 13.81167

With this approach, you can programmatically retrieve and analyze trace data for your Shiny applications. You can extend this example (see the sketch after the list) to:
- Filter spans by session ID, span name, or any other attribute
- Calculate custom metrics like total session duration or slowest operations
- Create custom visualizations or reports
- Export data for further analysis in other tools
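For example, a small sketch that builds on the session_dt tibble above (with {dplyr} already attached) to summarize where this session spent its time:

session_dt |>
  group_by(span_name) |>
  summarise(
    n_spans = n(),
    total_sec = sum(duration),
    max_sec = max(duration)
  ) |>
  arrange(desc(total_sec))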
Common queries
The Logfire website only allows you to filter existing records (SELECT * from RECORDS WHERE ...). Here are some useful queries for analyzing your Shiny app's traces that leverage the full power of SQL when submitted through the API:
The Logfire API uses PostgreSQL syntax for queries. You can use standard SQL operations including WHERE, JOIN, GROUP BY, ORDER BY, and aggregate functions. The attributes column is a JSONB type, allowing you to query nested attributes using the ->> operator.
Find all sessions in the last hour
Query:
SELECT
DISTINCT attributes->>'session.id' as session_id
FROM records
WHERE start_timestamp > now() - interval '1 hour'
LIMIT 100

Data returned:
# A tibble: 14 × 1
session_id
<chr>
1 e2b710d19c0f620781a25ef0d37508bd
2 46e3b5d306a87bdc90a31ac9d4e44c0e
3 f1aa669a30f23f3b3305bd63be02eb91
4 6e3a57c52791d53130652866d8c6b4f5
5 db69ef7b705789452280d245a45f0355
6 ae8b245812c1ebaedc7b5e00e885bca1
7 b8d97646957be248e3176b02687ed883
8 3dd521dffdc762a586b8cc6344bf2f9d
9 d02602118b1d55ebd3ec96e5f914825e
10 ada4b860e4d9d50e9e6a1a3bcbdb89ff
11 5d96c649339a685953e554b2937f35d7
12 1f71d2ae2336b2d10712457a56cca3dc
13 30d44ecb6dc577cbd1460938141ab26f
14 aa7af4b252023f5290175bce11af50c9
Find slowest reactive expressions
Query:
SELECT
span_name as reactive_name,
AVG((end_timestamp - start_timestamp)::numeric) / 1000 / 1000 as avg_duration_sec
FROM records
WHERE otel_scope_name = 'co.posit.r-package.shiny'
GROUP BY reactive_name
ORDER BY avg_duration_sec DESC
LIMIT 10

Data returned:
# A tibble: 10 × 2
reactive_name avg_duration_sec
<chr> <dbl>
1 ExtendedTask <anonymous> 11.5
2 session_start 4.76
3 output n_squared 2.00
4 reactive expensive_calculation 2.00
5 ExtendedTask mock-session:<anonymous> 1.58
6 reactive_update 0.561
7 ExtendedTask mymod:rand_task 0.115
8 observe mymod:proms_observer 0.0240
9 observe event mymod:invoke_rand_task 0.0190
10 observe event chat:on_chat_user_input 0.0124
Count sessions by hour
Query:
SELECT
date_trunc('hour', start_timestamp) as hour,
COUNT(DISTINCT attributes->>'session.id') as session_count
FROM records
WHERE span_name = 'session_start'
GROUP BY hour
ORDER BY hour DESC
LIMIT 50

Data returned:
# A tibble: 8 × 2
hour session_count
<dttm> <dbl>
1 2025-12-08 18:00:00 1
2 2025-12-04 17:00:00 1
3 2025-12-04 16:00:00 6
4 2025-12-03 16:00:00 2
5 2025-12-03 15:00:00 3
6 2025-12-01 20:00:00 1
7 2025-12-01 19:00:00 6
8 2025-11-13 21:00:00 14
Learn more
OpenTelemetry integration in Shiny provides powerful observability for production applications. For more information:
- {otel} package documentation
- {otelsdk} setup guide
- OpenTelemetry documentation
- Shiny v1.12 release post
For related topics on improving Shiny apps: