
Hello Streaming Markdown

Express mode:
shiny create --template basic-markdown-stream --mode express --github posit-dev/py-shiny-templates/gen-ai

Core mode:
shiny create --template basic-markdown-stream --mode core --github posit-dev/py-shiny-templates/gen-ai

A basic example of collecting user input, using it to fill an LLM prompt template, and then sending the result to the LLM for response generation. The response is streamed back to the user in real time via MarkdownStream().
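The "fill an LLM prompt template" step can be sketched in plain Python. This is an illustrative sketch, not the template's actual code: the template string, the `fill_prompt` helper, and its parameter names are all hypothetical.

```python
# Hypothetical prompt template; the placeholders stand in for whatever
# inputs the app collects from the user (names are illustrative).
PROMPT_TEMPLATE = "Tell me a short story about {topic} in the style of {style}."

def fill_prompt(topic: str, style: str) -> str:
    """Insert the user's input into the template before sending it to the LLM."""
    return PROMPT_TEMPLATE.format(topic=topic, style=style)

# The filled-in prompt is what gets sent to the model for streaming generation.
prompt = fill_prompt("a robot", "a fairy tale")
```

In the template itself, the filled prompt is passed to the chat client's streaming method and the resulting chunks are fed to MarkdownStream() so the response renders incrementally.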

To learn more, see the article on Gen AI streaming.

Other model providers

This particular template uses chatlas.ChatAnthropic() to generate responses via Anthropic. With chatlas, switching to another provider is easy: just replace ChatAnthropic() with another provider's constructor (e.g., ChatOpenAI()).

Packages:

chatlas