Tool Calling UI in shinychat

Rich tool calling displays are now available in shinychat!
Authors

Garrick Aden-Buie

Carson Sievert

Barret Schloerke

Published

November 20, 2025

We’re jazzed to announce that shinychat now includes rich UI for tool calls! shinychat makes it easy to build LLM-powered chat interfaces in Shiny apps, and with tool calling UI, your users can see which tools are being executed and their outcomes. This feature is available in shinychat for R (v0.3.0) and shinychat for Python (v0.2.0 or later).

# R
install.packages("shinychat")

# Python
pip install shinychat

This release brings tool call displays that work with ellmer (R) and chatlas (Python). When the LLM calls a tool, shinychat automatically displays the request and result in a collapsible card interface.

In this post we’ll cover the new Tool calling UI features, how to set them up in your apps, and ways to customize the display. We’ll also highlight some chat bookmarking support and other improvements in shinychat for R v0.3.0. As always, you can find the full list of changes in the R release notes and Python release notes.

Tool calling UI

Tool calling lets you extend an LLM’s capabilities by giving it access to functions you define. When you provide a tool to the LLM, you’re telling it “here’s a function you can call if you need it.” The key thing to understand is that the tool runs on your machine (or wherever your Shiny app is running) — the LLM doesn’t directly run the tool itself. Instead, it asks you to run the function and return the result.

Both ellmer and chatlas make it easy to define tools and register them with your chat client1, and they also handle the back-and-forth of tool calls by receiving requests from the LLM, executing the tool, and sending the results back. This means you can focus on what you do best: writing code to solve problems.

Any problem you can solve with a function can become a tool for an LLM! You can give the LLM access to live data, APIs, databases, or any other resources your app can reach.

btw: A complete toolkit for R

If you’re working in R, btw is a complete toolkit to help LLMs work better with R. Whether you’re copy-pasting to ChatGPT, chatting with an AI assistant in your IDE, or building LLM-powered apps with shinychat, btw makes it easy to give LLMs the context they need.

And, most importantly, btw provides a full suite of tools for gathering context from R sessions, including tools to read help pages and vignettes, describe data frames, search for packages on CRAN, read web pages, and more.

Learn more at posit-dev.github.io/btw!
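For example, equipping a chat client with btw’s context tools takes just a couple of lines. A minimal sketch, assuming btw’s btw_tools() helper (which returns a set of ellmer tool definitions) and ellmer’s $set_tools() method:

library(ellmer)
library(btw)

# Register btw's context-gathering tools with an ellmer chat client
chat_client <- chat("openai/gpt-4.1-nano")
chat_client$set_tools(btw_tools())

# The model can now consult live documentation before answering
chat_client$chat("What does the `.cols` argument of dplyr::across() do?")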

When the LLM decides to call a tool, shinychat displays the request and result in the chat interface. Users can see which tools are being invoked, what arguments are being passed, and what data is being returned. The display is also designed to be customizable, so developers can tailor the appearance of tool calls to best serve their users.

Basic tool display

Let’s start by creating a simple weather forecasting tool that fetches weather data (for locations in the United States) for a given latitude and longitude. In R, we define the tool with ellmer and the weathR package:

library(shinychat)
library(ellmer)
library(weathR)

get_weather_forecast <- tool(
  function(lat, lon) {
    point_tomorrow(lat, lon, short = FALSE)
  },
  name = "get_weather_forecast",
  description = "Get the weather forecast for a location.",
  arguments = list(
    lat = type_number("Latitude"),
    lon = type_number("Longitude")
  )
)

# Register the tool with your chat client
chat <- ellmer::chat("openai/gpt-4.1-nano")
chat$register_tool(get_weather_forecast)
In Python with chatlas, a plain function with type hints and a docstring serves as the tool definition:

from chatlas import ChatOpenAI
import requests

def get_weather_forecast(lat: float, lon: float) -> dict:
    """Get the weather forecast for a location."""
    lat_lng = f"latitude={lat}&longitude={lon}"
    url = f"https://api.open-meteo.com/v1/forecast?{lat_lng}&current=temperature_2m,wind_speed_10m"
    response = requests.get(url)
    return response.json()["current"]

# Register the tool with your chat client
chat = ChatOpenAI(model="gpt-4.1-nano")
chat.register_tool(get_weather_forecast)

With this tool registered, when you ask a weather-related question, the LLM might decide to call the get_weather_forecast() tool to get the latest weather.

In a chat conversation in your R console with ellmer, this might look like the following.

chat$chat("Will I need an umbrella for my walk to the T?")
#> ◯ [tool call] get_weather_forecast(lat = 42.3515, lon = -71.0552)
#> ● #> [{"time":"2025-11-20 16:00:00 EST","temp":42,"dewpoint":0,"humidity":67,"p_rain":1,"wi…
#>
#> Based on the weather forecast, there is a chance of rain around 4 to 5 PM,
#> with mostly cloudy to partly sunny skies. It seems there might be some rain
#> during this time, so carrying an umbrella could be a good idea if you plan
#> to go out around that time. Otherwise, the weather looks relatively clear
#> in the evening.

Notice that I didn’t provide many context clues, but the model correctly guessed that I’m walking to the MBTA in Boston, MA and picked the latitude and longitude for Boston’s South Station.

In shinychat, when the LLM calls the tool, shinychat automatically displays the tool request in a collapsed card:

Expanding the card shows the arguments passed to the tool. When the tool completes, shinychat replaces the request with a card containing the result:

If the tool throws an error, the error is captured and the error message is sent back to the LLM. Typically this happens when the model makes a mistake in calling the tool, and the error message is often instructive enough for the model to correct its arguments and try again.
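You can lean into this by validating arguments inside the tool and throwing informative errors. A minimal sketch, extending the weather tool from above:

get_weather_forecast <- tool(
  function(lat, lon) {
    # An informative error goes back to the LLM, which can then retry
    # with corrected arguments
    if (abs(lat) > 90 || abs(lon) > 180) {
      stop("`lat` must be within [-90, 90] and `lon` within [-180, 180].")
    }
    point_tomorrow(lat, lon, short = FALSE)
  },
  name = "get_weather_forecast",
  description = "Get the weather forecast for a location.",
  arguments = list(
    lat = type_number("Latitude"),
    lon = type_number("Longitude")
  )
)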

shinychat updates the card to show the error message:

Setting up streaming

To enable tool UI in your apps, you need to ensure that tool requests and results are streamed to shinychat:

You don’t need to do anything if you’re using chat_app() or the chat module via chat_mod_ui() and chat_mod_server(); tool UI is enabled automatically.

If you’re using chat_ui() with chat_append(), set stream = "content" when calling $stream_async():

server <- function(input, output, session) {
  client <- ellmer::chat("openai/gpt-4.1-nano")
  client$register_tool(get_weather_forecast)

  observeEvent(input$chat_user_input, {
    stream <- client$stream_async(input$chat_user_input, stream = "content")
    chat_append("chat", stream)
  })
}

In Python with Shiny Express, use content="all" when calling stream_async():

app.py
from chatlas import ChatOpenAI
from shiny.express import ui
from shinychat.express import Chat

client = ChatOpenAI(model="gpt-4.1-nano")
client.register_tool(get_weather_forecast)

chat = Chat(id="chat")
chat.ui()

@chat.on_user_submit
async def handle_user_input(user_input: str):
    response = await client.stream_async(user_input, content="all")
    await chat.append_message_stream(response)

For Shiny Core mode:

app.py
from chatlas import ChatOpenAI
from shiny import App, ui
from shinychat import Chat

client = ChatOpenAI(model="gpt-4.1-nano")
client.register_tool(get_weather_forecast)

app_ui = ui.page_fluid(
    Chat(id="chat").ui()
)

def server(input, output, session):
    chat = Chat(id="chat")

    @chat.on_user_submit
    async def handle_user_input(user_input: str):
        response = await client.stream_async(user_input, content="all")
        await chat.append_message_stream(response)

app = App(app_ui, server)

Customizing tool title and icon

You can enhance the visual presentation of tool requests and results by adding custom titles and icons to your tools. This helps users quickly identify which tools are being called.

In R, use ellmer’s tool_annotations() to add a title and icon:

get_weather_forecast <- tool(
  function(lat, lon) {
    point_tomorrow(lat, lon, short = FALSE)
  },
  name = "get_weather_forecast",
  description = "Get the weather forecast for a location.",
  arguments = list(
    lat = type_number("Latitude"),
    lon = type_number("Longitude")
  ),
  annotations = tool_annotations(
    title = "Weather Forecast",
    icon = bsicons::bs_icon("cloud-sun")
  )
)

With chatlas, you can customize the tool display in two ways:

  1. Use the ._display attribute to customize the tool display:

    import faicons
    
    def get_weather_forecast(lat: float, lon: float) -> dict:
        """Get the weather forecast for a location."""
        # ... implementation ...
    
    get_weather_forecast._display = {
        "title": "Weather Forecast",
        "icon": faicons.icon_svg("cloud-sun")
    }

    This approach sets the title and icon for all calls to this tool, so it’s ideal for predefined tools or tools that are bundled in a Python module or package.

  2. Set the tool annotations at registration time:

    chat.register_tool(
        get_weather_forecast,
        annotations={
            "title": "Weather Forecast",
            "icon": faicons.icon_svg("cloud-sun")
        }
    )

    This approach allows you to customize the display for a specific chat client or application without modifying the tool function itself.

Now the tool card shows your custom title and icon:

Custom display content

By default, shinychat shows the raw tool result value as a code block. But often you’ll want to present data to users in a more polished format—like a formatted table or a summary.

You can customize the display by returning alternative content:

In R, return a ContentToolResult with extra$display containing the alternative content:

get_weather_forecast <- tool(
  function(lat, lon, location_name) {
    forecast_data <- point_tomorrow(lat, lon, short = FALSE)
    forecast_table <- gt::as_raw_html(gt::gt(forecast_data))

    ContentToolResult(
      forecast_data,  # This is what the LLM sees
      extra = list(
        display = list(
          html = forecast_table,  # This is what users see
          title = paste("Weather Forecast for", location_name)
        )
      )
    )
  },
  name = "get_weather_forecast",
  description = "Get the weather forecast for a location.",
  arguments = list(
    lat = type_number("Latitude"),
    lon = type_number("Longitude"),
    location_name = type_string("Name of the location")
  ),
  annotations = tool_annotations(
    title = "Weather Forecast",
    icon = bsicons::bs_icon("cloud-sun")
  )
)

In Python, return a chatlas ToolResult with display options:

from chatlas import ToolResult
import pandas as pd

def get_weather_forecast(lat: float, lon: float, location_name: str):
    """Get the weather forecast for a location."""
    # Get forecast data
    data = fetch_weather_data(lat, lon)

    # Create a DataFrame for the LLM
    forecast_df = pd.DataFrame(data)

    # Create HTML table for users
    forecast_table = forecast_df.to_html(index=False)

    return ToolResult(
        value=forecast_df.to_dict(),  # LLM sees this
        display={
            "html": forecast_table,  # Users see this
            "title": f"Weather Forecast for {location_name}"
        }
    )

The display options support three content types (in order of preference):

  1. html: HTML content, e.g. from packages like {gt}, {reactable}, or {htmlwidgets} in R, or HTML strings (such as pandas’ to_html() output) in Python
  2. markdown: Markdown text that’s automatically rendered (see the sketch after this list)
  3. text: Plain text without code formatting
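For results that read better as prose than as a table, a tool could return markdown instead. A minimal sketch (the summary string is a stand-in for text you’d build from the forecast data):

ContentToolResult(
  forecast_data,  # the LLM still sees the full data
  extra = list(
    display = list(
      # users see rendered markdown instead of a raw code block
      markdown = "**Tomorrow:** high near 45°F with a chance of afternoon rain."
    )
  )
)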

Here’s what a formatted table looks like in the tool result:

Additional display options

You can control how tool results are presented using additional display options, demonstrated in the sketch after this list:

  • show_request = FALSE: Hide the tool call details when they’re obvious from the display
  • open = TRUE: Expand the result panel by default (useful for rich content like maps or charts)
  • title and icon: Override the tool’s default title and icon for this specific result
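In R, these options sit alongside the display content in extra$display. A sketch reusing the weather tool from above (the option placement follows the display list shown earlier):

ContentToolResult(
  forecast_data,
  extra = list(
    display = list(
      html = forecast_table,
      title = paste("Weather Forecast for", location_name),
      open = TRUE,           # expand the result card by default
      show_request = FALSE   # the table makes the request details redundant
    )
  )
)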

Another helpful feature is to include an _intent argument in your tool definition. When present in the tool arguments, shinychat shows the _intent value in the tool card header, helping users understand why the LLM is calling the tool.

tool_with_intent <- tool(
  function(`_intent`) {
    runif(1)
  },
  name = "random_number",
  description = "Generate a random number.",
  arguments = list(
    `_intent` = type_string(
      "Explain why you're generating this number"
    )
  )
)
The same pattern in Python:

import random

def random_number(_intent: str) -> float:
    """Generate a random number.

    Args:
        _intent: Explain why you're generating this number
    """
    return random.random()

Notice that the tool function itself doesn’t actually use the _intent argument, but its presence allows shinychat to give the user additional context about the tool call.

Bookmarking support

When a Shiny app reloads, the app returns to its initial state, unless the URL includes bookmarked state.2 Automatically updating the URL to include a bookmark of the chat state is a great way to help users return to their work if they accidentally refresh the page or unexpectedly lose their connection.

Both shinychat for R and Python provide helper functions that make it easy to restore conversations with bookmarks. This means users can refresh the page or share a URL and pick up right where they left off.

In R, the chat_restore() function restores the message history from the bookmark when the app starts up and ensures that the chat client state is automatically bookmarked on user input and assistant responses.

library(shiny)
library(shinychat)

ui <- function(request) {
  page_fillable(
    chat_ui("chat")
  )
}

server <- function(input, output, session) {
  chat_client <- ellmer::chat_openai(model = "gpt-4o-mini")

  # Automatically save chat state on user input and responses
  chat_restore("chat", chat_client)

  observeEvent(input$chat_user_input, {
    stream <- chat_client$stream_async(input$chat_user_input)
    chat_append("chat", stream)
  })
}

# Enable URL-based bookmarking
shinyApp(ui, server, enableBookmarking = "url")

enableBookmarking = "url" stores the chat state as encoded data in the query string of the app’s URL. Because browsers limit how long a URL can be, chatbots expected to accumulate large conversation histories should use enableBookmarking = "server", which stores state server-side without URL size limitations.
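Switching stores is a one-line change to the shinyApp() call:

# Store bookmark state server-side instead of in the URL
shinyApp(ui, server, enableBookmarking = "server")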

And if you’re using chat_app() for quick prototypes, bookmarking is already enabled automatically.

In Python, the .enable_bookmarking() method handles the where, when, and how of bookmarking chat state.

Express mode

from chatlas import ChatOllama
from shiny.express import ui

chat_client = ChatOllama(model="llama3.2")

chat = ui.Chat(id="chat")
chat.ui(messages=["Welcome!"])

chat.enable_bookmarking(
    chat_client,
    bookmark_store="url", # or "server"
    bookmark_on="response", # or None
)

Core mode

from chatlas import ChatOllama
from shiny import ui, App

app_ui = ui.page_fixed(
    ui.chat_ui(id="chat", messages=["Welcome!"])
)

def server(input):
    chat_client = ChatOllama(model="llama3.2")
    chat = ui.Chat(id="chat")

    chat.enable_bookmarking(
        chat_client,
        bookmark_on="response", # or None
    )

app = App(app_ui, server, bookmark_store="url")

Configuration options

The .enable_bookmarking() method handles three aspects of bookmarking:

  1. Where (bookmark_store)
    • "url": Store the state in the URL.
    • "server": Store the state on the server. Consider this over "url" if you want to support a large amount of state, or have other bookmark state that can’t be serialized to JSON.
  2. When (bookmark_on)
    • "response": Triggers a bookmark when an "assistant" response is appended.
    • None: Don’t trigger a bookmark automatically. This assumes you’ll be triggering bookmarks through other means (e.g., a button).
  3. How: handled automatically by registering the relevant on_bookmark and on_restore callbacks.

When .enable_bookmarking() triggers a bookmark for you, it’ll also update the URL query string to include the bookmark state. This way, when the user unexpectedly loses connection, they can load the current URL to restore the chat state, or go back to the original URL to start over.

Other improvements in shinychat for R

Beyond tool calling UI and bookmarking support, shinychat for R v0.3.0 includes several other enhancements.

Better programmatic control

chat_mod_server() now returns a set of reactive values and functions for controlling the chat interface:

server <- function(input, output, session) {
  chat <- chat_mod_server("chat", ellmer::chat_openai())

  # React to user input
  observe({
    req(chat$last_input())
    print(paste("User said:", chat$last_input()))
  })

  # React to assistant responses
  observe({
    req(chat$last_turn())
    print("Assistant completed response")
  })

  # Programmatically control the chat
  observeEvent(input$suggest_question, {
    chat$update_user_input(
      value = "What's the weather like today?",
      submit = TRUE  # Automatically submit
    )
  })

  observeEvent(input$reset, {
    chat$clear()  # Clear history and UI
  })
}

The returned list includes:

  • last_input and last_turn reactives for monitoring chat state
  • update_user_input() for programmatically setting or submitting user input—great for suggested prompts or guided conversations
  • append() for adding messages to the chat UI
  • clear() for resetting the chat, with options to control how the client history is handled
  • client for direct access to the ellmer chat client

There’s also a standalone update_chat_user_input() function if you’re using chat_ui() directly, which supports updating the placeholder text and moving focus to the input.
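For example, you might refresh the input prompt after a reset. A sketch (the placeholder and focus argument names are assumptions based on the description above; check the function’s documentation):

observeEvent(input$reset, {
  update_chat_user_input(
    "chat",
    placeholder = "Ask another weather question...",  # assumed argument name
    focus = TRUE                                      # assumed argument name
  )
})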

Custom assistant icons

You can now customize the icon shown next to assistant messages to better match your application’s branding or to distinguish between different assistants:

library(bsicons)

# Set a custom icon for a specific response
chat_append(
  "chat",
  "Here's some helpful information!",
  icon = bs_icon("lightbulb")
)

# Or set a default icon for all assistant messages
chat_ui("chat", icon_assistant = bs_icon("robot"))

This is especially useful when building multi-agent applications where different assistants might have different personalities or roles.

Learn more

The tool calling UI opens up exciting possibilities for building transparent, user-friendly AI applications. Whether you’re fetching data, running calculations, or integrating with external services, users can now see exactly what’s happening.

To dive deeper, check out the shinychat documentation along with the R and Python release notes linked above.

Acknowledgements

A huge thank you to everyone who contributed to this release with bug reports, feature requests, and code contributions:

@bianchenhao, @cboettig, @chendaniely, @cpsievert, @DavZim, @DeepanshKhurana, @DivadNojnarg, @gadenbuie, @iainwallacebms, @janlimbeck, @jcheng5, @jimrothstein, @karangattu, @ManuelSpinola, @MohoWu, @nissinbo, @noamanemobidata, @parmsam, @PaulC91, @rkennedy01, @schloerke, @selesnow, @simonpcouch, @skaltman, @stefanlinner, @t-kalinowski, @thendrix-trlm, @wch, @wlandau, and @Yousuf28.

Footnotes

  1. See the ellmer tool calling documentation for R and the chatlas tool calling documentation for Python for more details on defining and registering tools.

  2. This can be especially frustrating behavior since hosted apps, by default, will close an idle session after a certain (configurable) amount of time.