| Title: | LLM-Powered Code Generation, Error Fixing, and Chat for 'RStudio' |
| Version: | 1.2.0 |
| Maintainer: | Shiyang Zheng <shiyang.zheng@nottingham.ac.uk> |
| Author: | Shiyang Zheng |
| Description: | An 'RStudio' addin that integrates large language model (LLM) assistance directly into the code-editing workflow. Features include: (1) generate R code from inline comments; (2) obtain LLM-assisted fixes for console errors; (3) insert plain-English explanations of selected code blocks; (4) a multi-turn Chat Panel with session-context awareness (loaded packages, global objects, source editor contents, console history). Supports 'OpenAI', 'Anthropic' (Claude), 'DeepSeek', 'Groq', 'Together AI', 'OpenRouter', 'Ollama' (fully local, no API key required), and any 'OpenAI'-compatible custom endpoint (e.g. 'LM Studio', 'vLLM', 'llama.cpp'). |
| License: | MIT + file LICENSE |
| Depends: | R (≥ 4.1.0) |
| Encoding: | UTF-8 |
| RoxygenNote: | 7.3.2 |
| URL: | https://github.com/ShiyangZheng/llmcoder |
| BugReports: | https://github.com/ShiyangZheng/llmcoder/issues |
| Imports: | rstudioapi (≥ 0.13), httr2 (≥ 1.0.0), miniUI (≥ 0.1.1), shiny (≥ 1.7.0), stringi (≥ 1.7.0), stringr (≥ 1.5.0), rlang (≥ 1.0.0), htmltools (≥ 0.5.0), jsonlite (≥ 1.8.0) |
| Suggests: | testthat (≥ 3.0.0), withr (≥ 2.5.0) |
| NeedsCompilation: | no |
| Packaged: | 2026-05-02 20:42:57 UTC; admin |
| Repository: | CRAN |
| Date/Publication: | 2026-05-05 14:06:13 UTC |
Collect the most recent R error message using multiple strategies
Description
Tries, in order: rlang::last_error(), .Last.error (base R), .Last.error condition message. Returns NULL if nothing is found.
Usage
.collect_last_error()
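The fallback chain described above can be sketched as follows; this is a hypothetical illustration of the strategy order, not the package's internal code (collect_last_error_sketch and its strategy list are invented names):

```r
# Hypothetical sketch of the fallback chain: try each error-recovery
# strategy in order and return the first non-empty message, or NULL.
collect_last_error_sketch <- function() {
  strategies <- list(
    # 1. rlang's last-error store (errors from rlang-aware packages)
    function() tryCatch(conditionMessage(rlang::last_error()),
                        error = function(e) NULL),
    # 2. Base R's .Last.error binding at the top level
    function() {
      le <- get0(".Last.error", envir = globalenv())
      if (inherits(le, "condition")) conditionMessage(le) else NULL
    }
  )
  for (s in strategies) {
    msg <- s()
    if (!is.null(msg) && nzchar(msg)) return(msg)
  }
  NULL
}
```

Each strategy is wrapped so that a missing package or an empty error store degrades to NULL rather than raising a new error.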
Console history from rstudioapi with ~/.Rhistory fallback
Description
Console history from rstudioapi with ~/.Rhistory fallback
Usage
.get_console_history(max_hist = 30L)
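The ~/.Rhistory fallback amounts to taking the tail of a plain-text file; a minimal sketch (history_tail is an invented name, not the package's internal helper):

```r
# Hypothetical sketch of the ~/.Rhistory fallback: return the last
# `max_hist` lines of the history file, or an empty character vector
# when the file does not exist.
history_tail <- function(path = "~/.Rhistory", max_hist = 30L) {
  if (!file.exists(path)) return(character())
  utils::tail(readLines(path, warn = FALSE), max_hist)
}
```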
Escape a character string for safe embedding in a JS string literal
Description
Replaces characters that break JS string literals: backslash, double-quote, newline, carriage-return, and tab.
Usage
.js_esc(x)
Arguments
x |
Character vector. |
Value
Character vector with escapes applied.
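The escaping rules listed above can be sketched with chained fixed-string substitutions; js_esc_sketch is an invented name and the exact order inside the package may differ, except that backslash must be escaped first:

```r
# Hypothetical re-implementation of the escaping rules: escape backslash
# first, then the characters that would terminate or break a JS string
# literal.
js_esc_sketch <- function(x) {
  x <- gsub("\\", "\\\\", x, fixed = TRUE)  # backslash -> \\
  x <- gsub("\"", "\\\"", x, fixed = TRUE)  # double quote -> \"
  x <- gsub("\n", "\\n", x, fixed = TRUE)   # newline -> \n
  x <- gsub("\r", "\\r", x, fixed = TRUE)   # carriage return -> \r
  gsub("\t", "\\t", x, fixed = TRUE)        # tab -> \t
}
```

Escaping backslash first matters: otherwise the backslashes introduced by the later substitutions would be doubled again.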
Open the LLMcoder Chat Panel
Description
Launches an interactive multi-turn chat powered by your configured LLM directly inside RStudio. The panel supports:
Multi-turn conversation history.
Session Context awareness (loaded packages, global objects, source editor contents, console history).
One-click code execution: each code block (marked ```r) includes a Run button that sends the code to the R console.
Prompt style presets: General, R Code Helper, Statistics Advisor, Research (Psycho).
Session context toggle.
Transcript export.
Usage
addin_chat_panel(system_prompt_override = NULL)
chat_gadget(system_prompt_override = NULL)
Arguments
system_prompt_override |
Override the auto-generated system prompt.
Use with caution; this replaces the default persona entirely.
Pass NULL (the default) to keep the auto-generated prompt. |
Details
Bind a keyboard shortcut in RStudio (Tools -> Modify Keyboard
Shortcuts...) to addin_chat_panel() for quick access
(Ctrl+Shift+L is a common choice).
Value
Invisible NULL (launches a Shiny gadget and does not
return until the gadget is closed).
See Also
llmcoder_setup(), addin_generate_from_comment()
Examples
## Not run:
# Open the chat panel (bind a shortcut such as Ctrl+Shift+L for quick access)
addin_chat_panel()
# Start with a custom system prompt
addin_chat_panel(system_prompt_override = "You are a strict R reviewer.")
## End(Not run)
Explain selected R code as inline comments
Description
Select a block of R code in the editor, then trigger this addin
(recommended shortcut: Ctrl+Shift+E / Cmd+Shift+E). An explanation is
inserted as # comment lines immediately above the selected code block.
Usage
addin_explain_code()
Details
The LLM receives the selected code and is instructed to produce a concise,
human-readable explanation — focusing on what the code does and why,
not on basic R syntax. Every output line is prefixed with # so the
explanation is valid R that can be left in the source file.
Value
Invisible NULL (called for side-effects).
See Also
addin_generate_from_comment(), llmcoder_setup()
Fix the last console error automatically
Description
After running code that produces an error in the R console, trigger this
addin (recommended shortcut: Ctrl+Shift+F / Cmd+Shift+F).
Usage
addin_fix_console_error()
Details
The addin attempts to recover the most recent error message using several strategies, in order of priority:
The rlang last-error store (rlang::last_error()), which captures errors thrown by rlang-aware packages and the tidyverse.
Base R's .Last.error binding (set whenever an unhandled condition reaches the top level).
The .Last.error.trace character vector written by some versions of rlang.
The complete source file currently open in the editor is also sent to the LLM
as context. The LLM returns the entire corrected file, with changed lines
annotated as # FIX: <reason>. A diff-style preview dialog lets you review
and edit the fix before applying it.
Workflow
Run code — error appears in console.
Trigger this addin.
Review the fix in the preview dialog → click Apply Fix.
If no recent error is detected, a dialog explains the possible reasons and
suggests using addin_fix_selected_error() instead.
Value
Invisible NULL (called for side-effects).
See Also
addin_fix_selected_error(), llmcoder_setup()
Fix an error by selecting its text
Description
Select the error message text in the editor (or paste it into a temporary comment), then trigger this addin. The addin pairs the selected text with the complete source file currently open in the editor and asks the LLM for a fix, displaying the result in a review dialog.
Usage
addin_fix_selected_error()
Details
This addin is the recommended fallback when addin_fix_console_error() does
not detect an error automatically (e.g., because the error occurred inside a
tryCatch() block or in a separate R process).
Workflow
Copy the error message from the console.
Paste it anywhere in the source file, or simply select it in the console output if your terminal supports that.
Select the error text in the editor.
Trigger this addin.
Review and apply the suggested fix.
Value
Invisible NULL (called for side-effects).
See Also
addin_fix_console_error(), llmcoder_setup()
Generate R code from a comment (silent insert)
Description
Place the cursor on a line beginning with #, then trigger this addin
(default shortcut: Ctrl+Shift+G on Windows/Linux, Cmd+Shift+G on macOS).
The LLM reads the comment text and the surrounding code context, then inserts
the generated R code on the line immediately below the comment.
Usage
addin_generate_from_comment()
Details
The addin extracts the text of the comment at the cursor position and up to
getOption("llmcoder.context_lines", 40L) lines of preceding code as
context. The provider, model, and API key are taken from options set by
llmcoder_setup() or the LLMcoder Settings addin.
No dialog is shown; code is inserted immediately. Use
addin_generate_with_preview() if you prefer to review the output first.
Value
Invisible NULL (called for side-effects).
See Also
addin_generate_with_preview(), llmcoder_setup()
Generate R code with an editable preview dialog
Description
Same as addin_generate_from_comment() but opens a Shiny gadget so you can
review and optionally edit the generated code before it is inserted into the
editor. Recommended shortcut: Ctrl+Shift+P / Cmd+Shift+P.
Usage
addin_generate_with_preview()
Details
The preview dialog shows the generated code in an editable text area. Click Insert to place it in the editor, or close the dialog to discard the result.
Value
Invisible NULL (called for side-effects).
See Also
addin_generate_from_comment(), llmcoder_setup()
Open the LLMcoder settings dialog
Description
Launches an interactive Shiny gadget that lets you configure the LLM
provider, model, API key, Ollama URL (for local models), custom base URL
(for LM Studio / vLLM / llama.cpp), and context-window size. Settings can
optionally be persisted to ~/.Rprofile so they survive R restarts.
Usage
addin_settings()
Value
Invisible NULL (called for side-effects).
See Also
llmcoder_setup(), llmcoder_config()
System prompt for code explanation
Description
Returns the system prompt instructing the LLM to write R comments explaining the user's selected code.
Usage
build_explain_prompt()
Value
Character string: the system prompt sent to the LLM for the explain workflow.
Examples
## Not run:
build_explain_prompt()
## End(Not run)
System prompt for error fixing
Description
Returns the system prompt instructing the LLM to diagnose an R error and produce corrected code.
Usage
build_fix_prompt()
Value
Character string: the system prompt sent to the LLM for the fix workflow.
Examples
## Not run:
build_fix_prompt()
## End(Not run)
System prompt for code generation
Description
System prompt for code generation
Usage
build_system_prompt()
Anthropic Messages API call
Description
Anthropic Messages API call
Usage
call_anthropic(prompt, system_prompt, api_key, model)
Anthropic Messages API with multi-turn history
Description
Extracts the system role from messages and moves it to the top-level
system field, as required by the Anthropic API. Only
user and assistant roles are allowed.
Usage
call_anthropic_history(messages, api_key, model)
Arguments
messages |
List of message objects. |
api_key |
API key. |
model |
Model name. |
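The system-role extraction described above can be sketched like this (split_system_sketch is an invented name; the package's internal helper may differ):

```r
# Hypothetical sketch: pull the system message out of an OpenAI-style
# message list, leaving only user/assistant turns, as the Anthropic
# Messages API requires.
split_system_sketch <- function(messages) {
  is_sys <- vapply(messages, function(m) identical(m$role, "system"),
                   logical(1))
  list(
    system   = if (any(is_sys)) messages[is_sys][[1]]$content else NULL,
    messages = messages[!is_sys]
  )
}
```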
Call the configured LLM
Description
Unified dispatch function that reads provider, model, and credentials from
options (set via llmcoder_setup() or Addins -> LLMcoder Settings) and
forwards the request to the appropriate backend.
Usage
call_llm(prompt, system_prompt, context = NULL)
Arguments
prompt |
Character. The user-facing instruction. |
system_prompt |
Character. The system-level instruction for the model. |
context |
Character or NULL. |
Details
Supported providers:
"openai": OpenAI Chat Completions API (https://api.openai.com/v1).
"anthropic": Anthropic Messages API (https://api.anthropic.com/v1/messages).
"deepseek": DeepSeek Chat API, OpenAI-compatible (https://api.deepseek.com/v1).
"ollama": Local Ollama server (default http://localhost:11434). No API key required.
"groq": Groq Cloud API, OpenAI-compatible (https://api.groq.com/openai/v1). Extremely fast inference.
"together": Together AI API, OpenAI-compatible (https://api.together.xyz/v1). Wide open-source model selection.
"openrouter": OpenRouter API, OpenAI-compatible (https://openrouter.ai/api/v1). Unified gateway to 100+ models.
"custom": Any OpenAI-compatible server. Set llmcoder.custom_url to the base URL (e.g. "http://localhost:1234/v1" for LM Studio).
Value
Character string containing the model's response text.
Examples
## Not run:
llmcoder_setup("ollama", model = "llama3")
resp <- call_llm(
prompt = "Write R code to compute the mean of a numeric vector",
system_prompt = "You are an R programming assistant.",
context = NULL
)
cat(resp)
## End(Not run)
Call the configured LLM with full message history
Description
Like call_llm() but accepts a messages list that preserves the full
conversation context across multiple turns. This is the engine behind
addin_chat_panel().
Usage
call_llm_history(messages, system_prompt_override = NULL)
Arguments
messages |
List of messages, each element a list with
role and content entries. |
system_prompt_override |
Optional character string. If supplied,
it replaces the system role message in messages. |
Value
Character string: the model's response text.
Ollama local API (uses the OpenAI-compatible /v1 endpoint, Ollama >= 0.1.24)
Description
No API key is required. The Ollama server must be running locally;
start it with ollama serve in a terminal.
Usage
call_ollama(prompt, system_prompt, model, base_url)
Ollama multi-turn (OpenAI-compatible endpoint)
Description
Ollama multi-turn (OpenAI-compatible endpoint)
Usage
call_ollama_history(messages, model, base_url)
Generic OpenAI-compatible chat completions call
Description
Generic OpenAI-compatible chat completions call
Usage
call_openai_compat(
prompt,
system_prompt,
api_key,
model,
base_url,
extra_hdrs = character()
)
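The request body for such a call follows the public OpenAI chat-completions shape; a sketch of the payload (build_chat_body is an invented name, and whether the package assembles it exactly this way is an assumption):

```r
# Hypothetical sketch of the JSON payload POSTed to
# {base_url}/chat/completions with an "Authorization: Bearer <api_key>"
# header (plus any extra_hdrs). Field names follow the OpenAI spec.
build_chat_body <- function(prompt, system_prompt, model) {
  list(
    model = model,
    messages = list(
      list(role = "system", content = system_prompt),
      list(role = "user",   content = prompt)
    )
  )
}
```

The response text then typically lives at choices[[1]]$message$content in the parsed JSON body.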
OpenAI-compatible multi-turn call
Description
OpenAI-compatible multi-turn call
Usage
call_openai_compat_history(
messages,
api_key,
model,
base_url,
extra_hdrs = character()
)
Arguments
messages |
List of message objects. |
api_key |
API key. |
model |
Model name. |
base_url |
API base URL. |
extra_hdrs |
Named character vector of extra headers. |
Register Shiny custom message handlers for the Chat Panel
Description
Registers R-side handlers that respond to JavaScript events dispatched by the Chat Panel UI (code-run button clicks, etc.).
Usage
chat_js_handlers(session)
Arguments
session |
Shiny session object. |
Strip markdown code fences from LLM output
Description
Strips common markdown code fences from LLM output so the raw code can be inserted into the editor.
Usage
clean_code_output(code)
Arguments
code |
Character string returned by an LLM, possibly wrapped in
|
Value
Character string with fences removed. If no fences are found, the input is returned as-is.
Examples
## Not run:
raw <- "\n```r\nx <- mean(1:10)\nprint(x)\n```\n"
clean_code_output(raw)
clean_code_output("no fences here")
## End(Not run)
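A minimal regex-based sketch of fence stripping (strip_fences_sketch is an invented name; the package's actual patterns may be more permissive):

```r
# Hypothetical sketch of fence stripping: drop an opening ```<lang> line
# and a closing ``` line, leaving everything else untouched.
strip_fences_sketch <- function(code) {
  code <- sub("^\\s*```[A-Za-z]*\\s*", "", code)
  code <- sub("\\s*```\\s*$", "", code)
  trimws(code)
}
```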
Default model name per provider
Description
Returns a sensible default model name when the user has not specified one explicitly.
Usage
default_model(provider)
Arguments
provider |
Character. Provider identifier (see llmcoder_setup()). |
Value
Character string with the default model name.
Examples
## Not run:
default_model("openai")
default_model("anthropic")
default_model("ollama")
## End(Not run)
Very simple markdown-to-HTML renderer for chat messages
Description
Handles fenced code blocks (```r ... ```), inline code, bold, italic,
headings, and unordered lists. No external dependencies required.
Usage
escape_html(s)
Arguments
s |
Character string. Raw markdown text. |
Value
Character string. Sanitised HTML fragment.
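The sanitisation step of such a renderer can be sketched as follows (escape_html_sketch is an invented name):

```r
# Hypothetical sketch of HTML sanitisation: escape & first, so the
# ampersands introduced by &lt;/&gt; are not themselves re-escaped.
escape_html_sketch <- function(s) {
  s <- gsub("&", "&amp;", s, fixed = TRUE)
  s <- gsub("<", "&lt;", s, fixed = TRUE)
  gsub(">", "&gt;", s, fixed = TRUE)
}
```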
Extract the comment text at the current cursor position
Description
Reads the line at the cursor position from the active editor context and
returns its components. Throws an informative error if the cursor is not
positioned on a comment line (i.e., a line starting with #, possibly
preceded by whitespace).
Usage
extract_comment_at_cursor(ctx)
Arguments
ctx |
An rstudio_editor_context object. |
Value
A named list with four components:
comment: Character. The comment text with the leading # character(s) and optional space stripped.
row: Integer. 1-based row index of the comment line in the document.
full_line: Character. The raw full-line text as it appears in the editor.
indent: Character. The leading whitespace of the line (used to preserve indentation when inserting generated code).
Shared CSS for all gadgets
Description
Shared CSS for all gadgets
Usage
gadget_css()
Collect N lines of surrounding code above the cursor
Description
Returns a character string containing the n lines of source code
immediately above the comment line, joined by newlines. This is sent to the
LLM as context so that it can infer variable names, existing code style, and
already-loaded packages.
Usage
gather_context(ctx, row, n = 30)
Arguments
ctx |
An rstudio_editor_context object. |
row |
Integer. 1-based row of the comment line (context is taken from
rows max(1, row - n) through row - 1). |
n |
Integer. Maximum number of context lines (default 30). |
Value
A single character string (may be "" if row == 1).
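The windowing logic can be sketched over a plain character vector of document lines (gather_context_sketch is an invented name; the real function reads lines from the editor context object):

```r
# Hypothetical sketch: join the up-to-n lines immediately above `row`,
# clamped at the top of the document; row 1 has no context.
gather_context_sketch <- function(lines, row, n = 30) {
  if (row <= 1) return("")
  start <- max(1, row - n)
  paste(lines[start:(row - 1)], collapse = "\n")
}
```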
Get the active source editor context
Description
Wraps rstudioapi::getSourceEditorContext() with a check that RStudio is
available. Called by all addin entry points before any other operation.
Usage
get_editor_ctx()
Value
An rstudio_editor_context object (a list returned by
rstudioapi::getSourceEditorContext()).
Insert text immediately after a given row in the editor
Description
Inserts one or more lines of text at the beginning of the row that follows
row in the currently active source editor. Each line is prepended with
indent to match the indentation level of the originating comment.
Usage
insert_after_row(text, row, indent = "")
Arguments
text |
Character. Code to insert; may contain newlines. |
row |
Integer. 1-based row after which the text is inserted. |
indent |
Character. Leading whitespace prepended to every inserted
line (default ""). |
Value
Invisible NULL (called for side-effects).
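The indentation step that precedes the actual rstudioapi::insertText() call can be sketched as a pure function (indent_lines_sketch is an invented name):

```r
# Hypothetical sketch: prefix every line of the generated code with the
# originating comment's leading whitespace before insertion.
indent_lines_sketch <- function(text, indent = "") {
  lines <- strsplit(text, "\n", fixed = TRUE)[[1]]
  paste0(indent, lines, collapse = "\n")
}
```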
Show the current LLMcoder configuration
Description
Returns (and prints) the active provider, model, API key (masked), context-lines setting, and any provider-specific URLs.
Usage
llmcoder_config()
Value
An object of class "llmcoder_config": a named list with elements
provider, model, api_key, context_lines, ollama_url, and
custom_url. The API key is masked for security. When printed, it
displays in a human-readable table.
Examples
# Show current configuration (reads from option values)
llmcoder_config()
# Capture the config as a list for programmatic use
cfg <- llmcoder_config()
cfg$provider
cfg$model
Configure LLMcoder for the current session
Description
Sets the LLM provider, API key, model, and related options for the current R
session. For permanent configuration that survives restarts, use
Addins > LLMcoder Settings, which writes to ~/.Rprofile.
Usage
llmcoder_setup(
provider = c("openai", "anthropic", "deepseek", "ollama", "groq", "together",
"openrouter", "custom"),
api_key = NULL,
model = NULL,
context_lines = 40L,
ollama_url = "http://localhost:11434",
custom_url = ""
)
Arguments
provider |
Character. One of "openai", "anthropic", "deepseek", "ollama", "groq", "together", "openrouter", or "custom". |
api_key |
Character. Your API key. Not required when
provider = "ollama". |
model |
Character. Model identifier. If NULL, the provider's default model is used (see Details). |
context_lines |
Integer. Number of lines of code above the cursor that
are sent as context to the LLM (default 40L). |
ollama_url |
Character. Base URL of the Ollama server (default
"http://localhost:11434"). |
custom_url |
Character. Base URL of a custom OpenAI-compatible server
(e.g. "http://localhost:1234/v1" for LM Studio). |
Details
Provider defaults:
| Provider | Default model | Notes |
| openai | gpt-4o-mini | Fast, cost-effective |
| anthropic | claude-sonnet-4-20250514 | Strongest reasoning |
| deepseek | deepseek-chat | Very cheap, great code quality |
| ollama | llama3 | No API key, fully local |
| groq | llama-3.3-70b-versatile | Extremely fast inference |
| together | meta-llama/Llama-3-70b-chat-hf | Large open-source model choice |
| openrouter | openai/gpt-4o-mini | Unified gateway for 100+ models |
| custom | "" (must specify) | Any OpenAI-compat endpoint |
Value
Invisible NULL.
See Also
llmcoder_config(), addin_settings()
Examples
## Not run:
# OpenAI
llmcoder_setup("openai", api_key = Sys.getenv("OPENAI_API_KEY"))
llmcoder_setup("openai", api_key = Sys.getenv("OPENAI_API_KEY"), model = "gpt-4o")
# Anthropic Claude
llmcoder_setup("anthropic", api_key = Sys.getenv("ANTHROPIC_API_KEY"))
# DeepSeek (cheapest, excellent code quality)
llmcoder_setup("deepseek", api_key = Sys.getenv("DEEPSEEK_API_KEY"))
# Ollama — fully local, no API key needed
llmcoder_setup("ollama", model = "qwen2.5-coder:7b")
llmcoder_setup("ollama", model = "codellama:13b",
ollama_url = "http://192.168.1.10:11434") # remote server
# Groq — extremely fast inference on open models
llmcoder_setup("groq",
api_key = Sys.getenv("GROQ_API_KEY"),
model = "llama-3.3-70b-versatile")
# Together AI — wide open-source model selection
llmcoder_setup("together",
api_key = Sys.getenv("TOGETHER_API_KEY"),
model = "mistralai/Mixtral-8x7B-Instruct-v0.1")
# OpenRouter — unified gateway, supports 100+ models
llmcoder_setup("openrouter",
api_key = Sys.getenv("OPENROUTER_API_KEY"),
model = "anthropic/claude-3.5-sonnet")
# LM Studio or any OpenAI-compatible local server
llmcoder_setup("custom",
api_key = "lm-studio",
model = "local-model",
custom_url = "http://localhost:1234/v1")
# Reduce context window to save tokens
llmcoder_setup("openai",
api_key = Sys.getenv("OPENAI_API_KEY"),
context_lines = 20L)
## End(Not run)
Emit a status message to the R console
Description
Prefixes the message with [llmcoder] so users can distinguish addin
output from their own code output.
Usage
notify(msg)
Arguments
msg |
Character. The message text. |
Value
Invisible NULL.
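A minimal sketch of this pattern (notify_sketch is an invented name):

```r
# Hypothetical sketch: emit the prefixed text as a message condition,
# so it goes to stderr and can be silenced with suppressMessages().
notify_sketch <- function(msg) {
  message("[llmcoder] ", msg)
  invisible(NULL)
}
```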
List models available on a running Ollama server
Description
Queries GET /api/tags on the local Ollama REST API and returns the names
of all installed models. Useful for populating the model selector in the
Settings gadget.
Usage
ollama_list_models(
base_url = getOption("llmcoder.ollama_url", "http://localhost:11434")
)
Arguments
base_url |
Character. Ollama base URL. Defaults to the value of
getOption("llmcoder.ollama_url", "http://localhost:11434"). |
Details
Ollama must be running (ollama serve) before calling this function.
Models are installed with ollama pull <model> from the terminal.
Value
Character vector of model tag names, or NULL if Ollama is not
reachable.
Examples
## Not run:
ollama_list_models()
# [1] "llama3:latest" "qwen2.5-coder:7b" "mistral:latest"
## End(Not run)
Safely call the LLM, catching API errors
Description
Safely call the LLM, catching API errors
Usage
safe_call_llm(prompt, system_prompt, context)
Safely call the LLM with full message history
Description
Safely call the LLM with full message history
Usage
safe_call_llm_history(messages, system_prompt_override = NULL)
Arguments
messages |
List of messages, each a list with role and content entries. |
system_prompt_override |
Override system prompt (optional). |
Value
Either a character string (response text) or
list(error = message) on failure.
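The error-capturing pattern behind both safe wrappers can be sketched generically (safe_call_sketch is an invented name):

```r
# Hypothetical sketch: return the result on success, or
# list(error = <message>) on failure, so calling UI code never has to
# wrap the call in tryCatch() itself.
safe_call_sketch <- function(fn, ...) {
  tryCatch(fn(...), error = function(e) list(error = conditionMessage(e)))
}
```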
Safely obtain the active editor context
Description
Safely obtain the active editor context
Usage
safe_get_ctx()
Build a session-context system-prompt block
Description
Convenience wrapper around session_context_report() that wraps the report
in a descriptive header so the LLM can distinguish it from user content.
Usage
session_context_prompt(...)
Arguments
... |
Passed to session_context_report(). |
Value
Character string suitable for prepending to a system prompt.
Examples
## Not run:
ctx_prompt <- session_context_prompt()
## End(Not run)
Capture a human-readable report of the current R session state
Description
session_context_report() collects and formats the following information
from the current R session:
R version and operating system.
Loaded add-on packages (non-base).
Global environment objects grouped by class.
Contents of the active source editor (via rstudioapi).
Console command history (via rstudioapi; falls back to
~/.Rhistory).
This report is primarily used internally to populate the system prompt
sent to the LLM in the addin_chat_panel() gadget, so the model has
full awareness of the analyst's working environment.
Usage
session_context_report(max_objs = 20L, max_hist = 30L, quiet = FALSE)
Arguments
max_objs |
Maximum number of global objects to list per class group (default 20). |
max_hist |
Maximum number of console history lines to include (default 30). |
quiet |
If |
Value
Character string. A multi-section report ready to embed in a system prompt.
Examples
## Not run:
report <- session_context_report()
cat(report)
## End(Not run)
Write llmcoder options to ~/.Rprofile
Description
Writes (or replaces) an # --- llmcoder --- block in the user's
~/.Rprofile so that llmcoder settings persist across R sessions.
Usage
write_rprofile(provider, model, api_key, ctx_lines, ollama_url, custom_url)
Arguments
provider |
Character. Provider identifier (see llmcoder_setup()). |
model |
Character. Model name. |
api_key |
Character. API key (may be ""). |
ctx_lines |
Integer. Number of context lines. |
ollama_url |
Character. Ollama base URL. |
custom_url |
Character. Custom endpoint base URL. |
Value
Invisible NULL. Called for its side-effect of writing to
~/.Rprofile.
Examples
## Not run:
write_rprofile(
provider = "ollama",
model = "llama3",
api_key = "",
ctx_lines = 40L,
ollama_url = "http://localhost:11434",
custom_url = ""
)
## End(Not run)
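One way such a replace-block update can work is sketched below; the delimiter convention (identical start and end marker lines) and the helper name replace_block_sketch are assumptions for illustration, not the package's actual file format:

```r
# Hypothetical sketch: remove any existing marker-delimited block from
# the profile lines, then append a fresh block between two markers.
replace_block_sketch <- function(lines, new_block,
                                 marker = "# --- llmcoder ---") {
  idx <- which(lines == marker)
  if (length(idx) >= 2) lines <- lines[-(idx[1]:idx[2])]
  c(lines, marker, new_block, marker)
}
```

Rewriting the whole block on every save keeps the update idempotent: the rest of ~/.Rprofile is left untouched.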