Claude Code Voice Mode: Complete Guide to Voice-Driven Development in 2026
TL;DR
**Claude Code Voice Mode transforms terminal-based AI coding into a hands-free, speech-driven workflow.** Launched as a native /voice command rolling out in March 2026, it lets developers speak instructions, describe architecture, dictate code, and control Claude Code entirely through voice — directly inside the terminal.
Two paths exist for voice interaction:
- Native /voice command — Built into Claude Code. Push-to-talk with spacebar. Supports 20 languages. Zero configuration required.
- VoiceMode MCP server — Community-built two-way voice bridge. Claude speaks back. Supports continuous conversation mode and custom wake words.
Voice Mode is not a gimmick. It handles architecture discussions, debugging walkthroughs, code review narration, and accessibility workflows where keyboard-heavy CLI interaction creates friction. Developers with RSI, visual impairments, or mobility constraints gain a fully functional alternative input method for agentic coding.
Table of Contents
- What Is Claude Code Voice Mode?
- Key Features
- How to Use Voice Mode (Step-by-Step)
- Native Voice vs. VoiceMode MCP: When to Use Each
- Best Use Cases for Voice-Driven Development
- Limitations and Workarounds
- Pros and Cons
- Frequently Asked Questions
What Is Claude Code Voice Mode?
Claude Code is Anthropic's agentic command-line tool for AI-assisted development. It reads codebases, writes files, runs commands, manages Git workflows, and operates across entire projects with full context awareness. Until March 2026, every interaction happened through typed text in the terminal.
Voice Mode changes that. The /voice command, rolling out to Claude Code users in March 2026, adds push-to-talk speech input as a native feature. Developers press and hold the spacebar, speak their instruction, release, and Claude Code processes the transcribed speech exactly as it would process typed text. The spoken input flows through the same agentic pipeline — tool calls, file edits, bash commands, multi-step reasoning — with no degradation in capability.
The feature addresses a real friction point. Claude Code sessions frequently involve long, detailed instructions: "Refactor the authentication middleware to use JWT tokens instead of session cookies, update all route handlers that reference the session store, and add unit tests for the new token validation logic." Typing that takes 30 seconds. Speaking it takes 8. Over the course of a full development session with dozens of such instructions, the time savings compound.
Anthropic built the speech recognition on their existing multimodal infrastructure. The transcription runs server-side, supports 20 languages at launch, and handles technical vocabulary — function names, library references, programming jargon — with notably higher accuracy than general-purpose dictation tools. Developers do not need to spell out useState or querySelector; the model infers the correct technical terms from context.
Voice Mode operates at the input layer only in the native implementation. Claude Code still responds with text output in the terminal. For developers who want two-way voice — Claude speaking its responses aloud — the community-built VoiceMode MCP server provides that capability through the Model Context Protocol. The two approaches serve different workflows, and both are covered in detail below.
The timing aligns with a broader industry shift. GitHub Copilot added voice input in late 2025. Cursor introduced whisper-to-edit in January 2026. Voice-driven development is no longer experimental — it is a production-ready input modality for professional coding tools. Claude Code's implementation stands out because it plugs directly into an agentic framework, meaning voice commands trigger multi-step autonomous workflows, not just single-line completions.
Key Features
1. Native /voice Command
The /voice command is built directly into Claude Code's CLI. No plugins, no extensions, no additional installations. Typing /voice in an active Claude Code session switches the input mode from text to speech. The terminal displays a visual indicator — a pulsing waveform — confirming that Voice Mode is active and waiting for input.
This native integration matters because voice input has full access to Claude Code's tool suite. Spoken instructions can trigger file reads, code edits, bash commands, Git operations, and multi-file refactors with the same authority as typed instructions. There is no sandboxed "voice-only" subset of capabilities. Every action available through the keyboard is available through speech.
The /voice command persists for the duration of the session. Developers can toggle between voice and text input at any point without restarting the session or losing context. Claude Code maintains full conversation history regardless of input method.
2. Push-to-Talk Mechanics
Voice Mode uses a push-to-talk model, not always-on listening. The spacebar serves as the talk key: press and hold to record, release to submit. This design eliminates accidental activations, background noise capture, and the privacy concerns that plague always-listening voice assistants.
The push-to-talk approach also gives developers precise control over utterance boundaries. Complex instructions can be delivered in a single held press, or broken into multiple sequential presses for iterative refinement. Claude Code processes each release as a complete input, similar to pressing Enter after typing a message.
Audio feedback confirms recording state — a subtle tone on press and release — so developers working with screen readers or in noisy environments always know when the microphone is active. The recording indicator in the terminal provides visual confirmation simultaneously.
3. Twenty-Language Support
At launch, Claude Code Voice Mode supports transcription in 20 languages: English, Spanish, French, German, Portuguese, Italian, Dutch, Russian, Chinese (Mandarin), Japanese, Korean, Arabic, Hindi, Turkish, Polish, Swedish, Norwegian, Danish, Finnish, and Czech.
Language detection is automatic. Developers do not need to pre-select a language or switch modes before speaking in a different language. The transcription model identifies the spoken language from the audio input and processes accordingly. For multilingual teams, this means a developer can speak an instruction in Japanese, review Claude's English response, then follow up in Spanish — all within the same session.
Technical vocabulary transcription accuracy remains high across all supported languages. The model handles code-specific terms (function names, library names, programming keywords) consistently regardless of the spoken language wrapping them. A developer speaking German can say "Erstelle eine neue React-Komponente mit useState und useEffect" and the transcription correctly preserves React, useState, and useEffect as technical terms.
4. Full Context Awareness
Voice input feeds into the same context window as typed input. Claude Code does not treat spoken instructions as isolated commands — they inherit the full session context, including the current codebase understanding, previous conversation turns, active file states, and any skills or hooks configured for the project.
This means a developer can say "fix the bug we were just discussing" after a multi-turn text conversation, and Claude Code resolves the reference correctly. Voice inputs and text inputs are interleaved seamlessly in the conversation history. The model sees no distinction between the two input types after transcription.
Context awareness extends to the project's CLAUDE.md file, .claude/settings.json configuration, and any active MCP servers. If a project has custom instructions defining code style preferences, architectural patterns, or review checklists, those constraints apply equally to voice-initiated actions.
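As an illustration, a project-level CLAUDE.md might contain conventions like the following (the contents here are hypothetical, not from any real project); voice-initiated edits would respect them exactly as typed ones do:

```markdown
# Project conventions (illustrative example)

- Use TypeScript strict mode; avoid the `any` type.
- New React components are function components with hooks.
- Run the test suite after changing anything under src/.
```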
5. Code Dictation
Speaking code directly is the most technically demanding voice input scenario, and Claude Code handles it through intent inference rather than literal transcription. When a developer says "create a function called calculate total that takes an array of items and returns the sum of their prices," Claude Code generates syntactically correct code in the project's language — not a literal transcription of the English sentence.
This approach sidesteps the fundamental awkwardness of dictating punctuation. No one wants to say "open parenthesis items colon array of item close parenthesis colon number open curly brace." Instead, the developer describes what the code should do, and Claude Code writes it. The voice input acts as a high-level specification, processed through the same code generation engine that handles typed natural-language instructions.
For cases where exact code phrasing matters — specific variable names, particular API calls, exact string values — developers can spell out terms or use the phonetic clarification pattern: "the variable name is user dash I-D, lowercase." Claude Code's technical vocabulary model handles most cases without spelling, but the escape hatch exists when precision demands it.
6. Voice-to-Action Pipeline
Voice Mode is not a transcription layer sitting on top of a text interface. Spoken instructions flow through Claude Code's full agentic pipeline: intent recognition, tool selection, parameter extraction, execution, and response generation. A single spoken sentence can trigger a multi-step workflow.
Saying "find all files that import the deprecated auth module, update them to use the new authentication service, and run the test suite" initiates exactly the same sequence of tool calls — Grep, Read, Edit, Bash — that the equivalent typed instruction would produce. Claude Code decomposes the spoken instruction into discrete steps, executes them sequentially with its tool suite, and reports results in the terminal.
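The decomposition above can be approximated by hand. A rough shell equivalent of the first step (the Grep call) might look like this — note that the directory layout and the module name "old-auth" are invented purely for illustration:

```shell
# Build a tiny example tree (purely for illustration).
mkdir -p /tmp/voice-demo/src
printf "import auth from 'old-auth';\n" > /tmp/voice-demo/src/a.js
printf "import db from 'pg';\n" > /tmp/voice-demo/src/b.js

# The Grep step: list files importing the deprecated module.
grep -rl "old-auth" /tmp/voice-demo/src

# Claude Code's Edit step would then rewrite each match,
# and a final Bash step would run the test suite, e.g.:
# npm test
```

The point is not that developers should run these commands manually, but that a single spoken sentence maps onto the same discrete, inspectable tool calls.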
This pipeline integration is what distinguishes Claude Code Voice Mode from dictation tools like macOS Dictation or Windows Voice Typing. Those tools produce text. Claude Code Voice Mode produces actions. The spoken word is an instruction to an autonomous agent, not text for a document.
7. VoiceMode MCP Server (Two-Way Voice)
The VoiceMode MCP server is a community-built extension that adds bidirectional voice to Claude Code through the Model Context Protocol. Where native /voice provides speech input with text output, VoiceMode MCP provides speech input and speech output — Claude talks back.
The MCP server exposes several tools: listen (capture and transcribe speech), speak (convert Claude's text responses to audio), conversation_mode (continuous back-and-forth voice dialogue), and set_voice (choose from available TTS voices). It uses local speech recognition and text-to-speech engines, keeping audio processing on-device rather than routing through external APIs.
VoiceMode MCP supports custom wake words, voice activity detection for hands-free operation, and configurable silence thresholds that determine when an utterance ends. For developers who want a fully conversational coding experience — speaking to Claude and hearing Claude respond — this MCP server delivers it. The setup requires adding the server to the project's MCP configuration, covered in the step-by-step guide below.
8. Accessibility
Voice Mode represents the most significant accessibility improvement to Claude Code since its launch. Developers with repetitive strain injuries (RSI), carpal tunnel syndrome, motor impairments, or temporary injuries that limit keyboard use gain a fully functional alternative input method. Every capability of Claude Code is accessible through speech.
For developers with visual impairments who already use screen readers, Voice Mode adds a complementary input channel. Instead of typing commands that the screen reader then reads back for confirmation, developers can speak commands directly and hear the screen reader's output of Claude's text responses. The combination of voice input and screen reader output creates a fully audio-driven development workflow.
The 20-language support also serves as an accessibility feature for non-native English speakers. Developers who think and communicate more fluently in their native language can now issue instructions in that language, removing the cognitive overhead of translating thoughts into English before typing them. For a tool that processes natural language, accepting that language in the developer's strongest tongue meaningfully reduces friction.
How to Use Voice Mode (Step-by-Step)
Native /voice Command Setup
1. Update Claude Code to the Latest Version
Voice Mode requires Claude Code version 1.0.33 or later (the March 2026 release). Run the update command to ensure the feature is available:

```shell
claude update
```

Verify the version:

```shell
claude --version
```

If the version number is 1.0.33 or higher, the /voice command is available. No additional feature flags or beta opt-ins are required.
2. Start a Claude Code Session and Activate Voice Mode
Launch Claude Code in any project directory as usual:

```shell
claude
```

Once the session is active, type:

```
/voice
```
The terminal displays a waveform indicator and the message "Voice Mode active — hold spacebar to speak." The microphone permission prompt may appear on first use (macOS and Linux handle this at the OS level; grant the terminal application microphone access when prompted).
3. Speak Instructions Using Push-to-Talk
Hold the spacebar and speak clearly. Release the spacebar when finished. Claude Code transcribes the audio, displays the transcription in the terminal for confirmation, and processes the instruction through its standard agentic pipeline.
For best results:
- Speak at a natural pace — the model handles normal conversational speed
- State the full instruction in one utterance when possible
- Use technical terms naturally ("refactor the useState hook") — no need to spell out code terms
- Pause briefly before releasing the spacebar to ensure the final words are captured
4. Mix Voice and Text Input Freely
Voice Mode does not lock out keyboard input. At any point during a Voice Mode session, type a message normally and press Enter. Claude Code processes it identically. Switch back to voice by holding the spacebar again. The session context remains unified across both input methods.
To exit Voice Mode entirely, type /voice again to toggle it off, or type /exit to end the session.
5. (Optional) Set Up VoiceMode MCP Server for Two-Way Voice
For developers who want Claude to speak its responses aloud, install the VoiceMode MCP server:
```shell
# Install the MCP server globally
npm install -g voicemode-mcp

# Or add to your project's MCP configuration
claude mcp add voicemode -- npx voicemode-mcp
```
Alternatively, add the server directly to .claude/settings.json:
```json
{
  "mcpServers": {
    "voicemode": {
      "command": "npx",
      "args": ["voicemode-mcp"],
      "env": {
        "VOICE_MODEL": "default",
        "SILENCE_THRESHOLD": "1500",
        "LANGUAGE": "auto"
      }
    }
  }
}
```
Once configured, Claude Code gains access to VoiceMode MCP tools. The conversation_mode tool enables continuous bidirectional voice dialogue. The speak tool converts any of Claude's text responses into audio output. The listen tool captures speech input through the MCP path rather than the native /voice command.
Both native /voice and VoiceMode MCP can coexist in the same session. Native handles input; MCP handles output. This combination gives developers spoken instructions in and spoken responses out.
Native Voice vs. VoiceMode MCP: When to Use Each
Two distinct approaches to voice interaction exist in the Claude Code ecosystem. They solve different problems and suit different workflows.
| Feature | Native /voice | VoiceMode MCP Server |
|---|---|---|
| Installation | Built-in (v1.0.33+) | Requires npm install + MCP config |
| Input method | Push-to-talk (spacebar) | Push-to-talk or continuous listening |
| Output | Text only (terminal) | Text + speech (Claude talks back) |
| Language support | 20 languages | Depends on local TTS/STT engine |
| Wake word | None (spacebar activated) | Configurable custom wake word |
| Hands-free operation | No (requires spacebar) | Yes (voice activity detection) |
| Privacy | Audio sent to Anthropic for transcription | Local processing (on-device STT/TTS) |
| Latency | Low (~300ms transcription) | Variable (depends on local hardware) |
| Setup complexity | Zero configuration | Moderate (MCP server + audio dependencies) |
| Custom voices | N/A | Multiple TTS voice options |
| Best for | Quick voice input during coding | Fully conversational coding sessions |
Use native /voice when:
- Speed matters more than conversation flow. The native command has lower latency and zero setup overhead.
- Working in environments where Claude speaking aloud is impractical (open offices, shared spaces).
- The primary need is faster input, not bidirectional dialogue.
- Privacy preferences favor Anthropic's server-side processing over local audio handling.
Use VoiceMode MCP when:
- Hands-free operation is essential (RSI, mobility constraints, working away from the keyboard).
- The workflow benefits from hearing Claude's responses (debugging walkthroughs, architecture discussions while whiteboarding).
- Working solo in a private environment where audio output is not disruptive.
- Custom wake words or continuous listening mode suits the development style.
Use both together when:
- Native /voice handles input (lower latency, more reliable transcription).
- VoiceMode MCP's speak tool handles output (Claude reads responses aloud).
- The developer wants the best of both worlds: fast speech input with audible responses.
Best Use Cases for Voice-Driven Development
Architecture Discussions
Voice Mode excels when developers need to describe system architecture, data flows, or design decisions. These conversations are inherently verbal — most architecture discussions happen at whiteboards, not keyboards. Voice Mode brings that natural communication style into the coding tool.
A typical flow: the developer holds the spacebar and says "Let's design the data model for the notification system. We need three entities — a notification, a notification preference, and a delivery channel. Notifications should reference the user who triggered them, the user receiving them, the event type, and a JSON payload. Preferences should map users to channels with per-event-type overrides. Delivery channels are email, push, SMS, and in-app."
Claude Code processes that spoken architecture description exactly as it would a typed one — generating schema files, TypeScript interfaces, or database migration scripts depending on the project context. The difference is speed. That instruction takes roughly 15 seconds to speak and over a minute to type. Over a session of architectural design, the accumulated time savings are substantial.
Debugging Walkthroughs
Rubber duck debugging has always been verbal. Voice Mode formalizes the practice by letting developers narrate their debugging process to an agent that can actually act on the narration.
"The API returns a 500 error when the user has no profile picture. I think the issue is in the avatar URL resolver — it probably tries to call .toString() on a null value. Check the resolveAvatarUrl function in the user service and trace where it handles null inputs."
This kind of stream-of-consciousness debugging narration is natural to speak but tedious to type. Voice Mode captures the developer's thought process at the speed of speech, and Claude Code responds by actually checking the file, finding the null reference, and proposing a fix.
Code Review Narration
Reviewing pull requests benefits from Voice Mode in two directions. Developers reviewing someone else's code can speak their observations: "In the payment handler, the error case on line 47 swallows the exception without logging it. Flag that. Also, the retry logic on line 82 doesn't have exponential backoff — it should."
Claude Code processes these spoken review notes and can generate formal review comments, create GitHub PR review submissions, or apply the suggested fixes directly. The reviewer never touches the keyboard, which is particularly valuable when reviewing code on a secondary monitor while referencing documentation on the primary screen.
Accessibility-First Workflows
For developers who rely on Voice Mode as a primary input method rather than a convenience feature, the most impactful use cases are the everyday ones: navigating codebases, reading files, making edits, running tests, and managing Git workflows.
Voice Mode combined with a screen reader creates a complete audio loop for visually impaired developers. Combined with VoiceMode MCP's speech output, it creates a fully voice-driven development environment that requires no visual interaction with the terminal at all.
Developers recovering from hand injuries report that Voice Mode eliminates the binary choice between working through pain and taking extended leave. The ability to continue coding productively while healing — issuing the same complex multi-step instructions that would require hundreds of keystrokes — makes voice a genuine accessibility tool, not just an input preference.
Limitations and Workarounds
Transcription Accuracy in Noisy Environments
Voice Mode performs best in quiet environments. Background noise — office chatter, music, mechanical keyboards nearby — degrades transcription accuracy, particularly for technical terms.
Workaround: Use a directional microphone or headset with noise cancellation. The audio quality of the input signal has a direct impact on transcription accuracy. A $30 headset microphone significantly outperforms a laptop's built-in array microphone in noisy conditions.
No Inline Code Dictation by Punctuation
Developers cannot dictate code character-by-character with punctuation commands ("open paren," "semicolon," "close bracket"). Voice Mode processes natural language instructions, not stenographic code dictation.
Workaround: Describe the code's intent instead of its syntax. "Add a try-catch block around the database query that logs the error and returns a 500 response" produces better results than attempting to dictate the exact syntax. For precise character-level edits, switch to keyboard input — Voice Mode does not lock out text input at any point.
Latency on Long Utterances
Long spoken instructions (30+ seconds of continuous speech) introduce noticeable latency between releasing the spacebar and seeing the transcription. The audio must be fully uploaded and transcribed before Claude Code begins processing.
Workaround: Break long instructions into multiple shorter utterances. Speak one logical step, release the spacebar, wait for transcription, then speak the next step. This also gives the developer a chance to verify the transcription before Claude Code acts on it.
VoiceMode MCP Audio Dependencies
The VoiceMode MCP server requires local audio libraries (PortAudio, Sox, or platform-specific equivalents) that may not be installed on minimal server environments or Docker containers.
Workaround: VoiceMode MCP is designed for local development machines, not headless servers. For remote development, use native /voice (which only requires a local microphone) rather than VoiceMode MCP. If VoiceMode MCP is required in a container, include the audio dependencies in the container's base image.
No Voice-Triggered Permission Approvals
Claude Code's permission system (the "allow/deny" prompts for tool execution) still requires keyboard interaction. Voice Mode cannot approve or reject tool-use permissions by speech.
Workaround: Configure hooks to auto-approve trusted operations, reducing the number of keyboard-required permission prompts. Alternatively, use Claude Code's --allowedTools flag to pre-approve specific tool categories for the session.
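For example, a pre-approved allow list in .claude/settings.json can cut down keyboard interruptions during a voice session. The sketch below follows Claude Code's permissions format, but the specific tool rules are illustrative — verify the exact rule syntax against your installed version's documentation:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Grep",
      "Edit",
      "Bash(npm test)"
    ]
  }
}
```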
Accented Speech and Non-Standard Pronunciation
While 20 languages are supported, significant accent variation within a single language can reduce transcription accuracy. Regional dialects, code-switching between languages mid-sentence, and non-standard pronunciation of technical terms occasionally produce incorrect transcriptions.
Workaround: Speak technical terms slightly more deliberately than natural conversation. If a specific term is consistently mis-transcribed, use the phonetic spelling pattern ("the function is called K-A-R-D-D dash auth") or add the term to subsequent instructions as a typed correction so the session context includes the correct spelling.
Pros and Cons
Pros
- Faster instruction delivery. Complex multi-step instructions that take 60+ seconds to type can be spoken in 10-15 seconds. Over a full session, the time savings are measurable.
- Zero configuration for native mode. The /voice command works immediately on Claude Code 1.0.33+ with no setup, API keys, or dependencies beyond a working microphone.
- Full agentic capability. Voice input triggers the same tool suite and multi-step reasoning as typed input. No feature degradation.
- Twenty-language transcription. Automatic language detection with strong technical vocabulary handling across all supported languages.
- Accessibility as a first-class feature. Developers with RSI, motor impairments, or visual impairments gain a fully functional alternative to keyboard-driven CLI interaction.
- Seamless input mixing. Voice and text input coexist in the same session with shared context. No mode-switching penalties.
- VoiceMode MCP extends capabilities. The two-way voice option via MCP provides speech output, continuous listening, custom wake words, and hands-free operation for developers who need it.
Cons
- Text-only output in native mode. Claude Code does not speak responses through the native /voice command. Developers must read terminal output or install VoiceMode MCP for audio responses.
- Environment-dependent accuracy. Noisy environments degrade transcription quality. A quiet room or good headset microphone is effectively required for reliable use.
- No character-level code dictation. Voice Mode processes intent, not stenographic code. Developers who want to dictate exact syntax character-by-character need a different tool.
- Permission prompts require keyboard. The allow/deny security prompts cannot be approved by voice, breaking the hands-free flow at permission boundaries.
- VoiceMode MCP setup complexity. The two-way voice option requires MCP configuration, local audio library dependencies, and troubleshooting that non-technical users may find challenging.
- Latency on long utterances. Continuous speech exceeding 30 seconds introduces noticeable processing delay before Claude Code begins acting on the instruction.
- Privacy considerations. Native /voice sends audio to Anthropic's servers for transcription. Developers working under strict data governance policies should verify compliance before using voice input on sensitive codebases.
Frequently Asked Questions
Is Claude Code Voice Mode free?
Claude Code Voice Mode is included in all Claude Code subscription tiers at no additional cost. The /voice command is a native feature of Claude Code version 1.0.33 and later — there is no separate Voice Mode subscription, usage fee, or per-minute charge. Standard Claude Code usage limits and billing apply to the underlying model interactions, but the voice input layer itself adds no cost. The VoiceMode MCP server is open-source and free.
How do I activate Voice Mode in Claude Code?
Type /voice in any active Claude Code session running version 1.0.33 or later. The terminal displays a waveform indicator and instructions to hold the spacebar to speak. No configuration files, API keys, or additional installations are required. To deactivate, type /voice again. The toggle is instant and preserves all session context.
Does Voice Mode work with Claude Code hooks and skills?
Voice input feeds into the same processing pipeline as typed input. All configured hooks — PreToolUse, PostToolUse, Notification, Stop — fire identically regardless of whether the triggering instruction was typed or spoken. Skills loaded into the session context apply to voice-initiated actions with no difference in behavior. A spoken instruction that triggers a file edit still runs through any PostToolUse hooks configured for formatting or testing.
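As a concrete sketch, a PostToolUse hook that reformats files after any edit fires whether the triggering instruction was spoken or typed. The matcher and command below are illustrative assumptions — check your Claude Code version's hooks documentation for the exact schema:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
```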
What languages does Claude Code Voice Mode support?
Twenty languages at launch: English, Spanish, French, German, Portuguese, Italian, Dutch, Russian, Chinese (Mandarin), Japanese, Korean, Arabic, Hindi, Turkish, Polish, Swedish, Norwegian, Danish, Finnish, and Czech. Language detection is automatic — developers do not need to pre-select a language. Multilingual conversations within a single session are supported, and technical terms (function names, library names, code keywords) are preserved accurately across all supported languages.
Can I use Voice Mode for pair programming?
Voice Mode processes audio from a single microphone source, so it captures whoever is speaking. For in-person pair programming, one developer acts as the "voice driver" while the other observes. For remote pair programming over screen-sharing tools, the remote participant cannot directly use the host's Voice Mode — but they can type instructions into the same session while the host uses voice. The mixed input model (one developer on voice, one on keyboard) works naturally within Claude Code's unified context.
What is the difference between native /voice and VoiceMode MCP?
Native /voice is built into Claude Code and provides speech-to-text input only. It uses push-to-talk (spacebar), supports 20 languages, and requires zero setup. VoiceMode MCP is a community-built MCP server that adds two-way voice — Claude speaks responses aloud. It supports continuous listening, custom wake words, and runs speech processing locally. Native /voice is best for quick input; VoiceMode MCP is best for fully hands-free conversational coding. Both can run simultaneously in the same session.
Does Voice Mode work in remote/SSH sessions?
Voice Mode requires microphone access on the local machine running the terminal. For local terminal sessions, this works directly. For SSH sessions into remote servers, the microphone input still comes from the local terminal emulator — the audio is captured locally and the transcription is processed server-side by Anthropic. This means Voice Mode functions in SSH sessions as long as the local terminal application has microphone permissions. VoiceMode MCP, which requires local audio libraries on the machine running Claude Code, does not work in remote SSH scenarios without additional audio forwarding configuration.
How accurate is voice transcription for technical terms?
Transcription accuracy for technical vocabulary — function names, library names, programming keywords, CLI flags — is significantly higher than general-purpose dictation tools. Anthropic's transcription model is trained on developer-oriented speech data, so terms like useState, kubectl, querySelector, and package.json are recognized without spelling them out. Accuracy is highest in quiet environments with a good microphone. Unusual or project-specific identifiers (custom function names, internal API endpoints) may occasionally require phonetic spelling on first use, after which the session context improves subsequent recognition.
Related Articles
- Claude Skills vs. MCP Servers vs. Plugins: What's the Difference?
- Claude Code Hooks: Complete Guide with 15 Ready-to-Use Examples
- Browse All MCP Servers
- Browse All Claude Skills
