Vibe Coding
Vibe Coding means writing code by collaborating with an AI assistant — describing what you want in plain language and letting the AI generate, explain, or refactor the code for you. The Imbrace SDK ships an llms.txt file so any AI tool can instantly understand the SDK without hallucinating method names or argument shapes.
Setup
Before vibe coding, make sure the SDK is installed and your credentials are configured.
1. Install the SDK
```bash
# Node.js
npm install @imbrace/sdk

# Python
pip install imbrace
```

2. Store your credentials
Create a .env file in your project root. The SDK does not auto-read environment variables — you pass them to the constructor in step 3.
```bash
IMBRACE_API_KEY=your_api_key_here
IMBRACE_ORGANIZATION_ID=your_org_id_here
```

See Authentication to learn when to use an API Key vs an Access Token, and the Setup Guide for how to obtain an API key.
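Because the SDK does not auto-read environment variables, it helps to validate them up front rather than letting the client fail later with an opaque authentication error. A minimal stdlib-only sketch; the helper name `load_imbrace_credentials` is my own, not part of the SDK:

```python
import os


def load_imbrace_credentials() -> dict:
    """Read the two Imbrace variables from the environment.

    Raises a clear error if either is missing or empty, so the
    problem surfaces before the client is constructed.
    """
    creds = {}
    for var in ("IMBRACE_API_KEY", "IMBRACE_ORGANIZATION_ID"):
        value = os.environ.get(var)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {var}")
        creds[var] = value
    return creds
```

In practice you would pass `creds["IMBRACE_API_KEY"]` and `creds["IMBRACE_ORGANIZATION_ID"]` into the client constructor shown in step 3.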
3. Initialize the client
```typescript
import { ImbraceClient } from "@imbrace/sdk";

const client = new ImbraceClient({
  apiKey: process.env.IMBRACE_API_KEY,
  organizationId: process.env.IMBRACE_ORGANIZATION_ID,
});
```

```python
import os

from imbrace import ImbraceClient

client = ImbraceClient(
    api_key=os.environ["IMBRACE_API_KEY"],
    organization_id=os.environ.get("IMBRACE_ORGANIZATION_ID"),
)
```

4. Grab llms.txt
Download or copy the file at https://imbraceltd.github.io/api-sdk/llms.txt and drop it into your AI tool (see How to use it below).
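If you prefer to fetch the file in a script rather than by hand, a stdlib-only sketch using the URL published above (`fetch_text` is a hypothetical helper name, not part of the SDK):

```python
from urllib.request import urlopen

LLMS_TXT_URL = "https://imbraceltd.github.io/api-sdk/llms.txt"


def fetch_text(url: str, timeout: float = 10.0) -> str:
    """Download a small text resource and decode it as UTF-8."""
    with urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")


# Example: save a local copy to paste into your AI tool.
# with open("llms.txt", "w", encoding="utf-8") as f:
#     f.write(fetch_text(LLMS_TXT_URL))
```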
What is llms.txt?
llms.txt is a plain-text file (similar to robots.txt) that gives AI models a compact, accurate summary of a library — its clients, resources, authentication, and common patterns. When you paste it into an AI context window, the model already knows the SDK and can write correct code on the first try.
File URL: https://imbraceltd.github.io/api-sdk/llms.txt
How to use it
Claude (claude.ai or Claude Code)
- Open a new conversation.
- Paste the contents of llms.txt at the top of your message, then describe your task:

```
<context>
[paste llms.txt here]
</context>

Write a TypeScript snippet that streams a chat response from assistant "asst_abc"
and prints each text delta to the console.
```

Cursor / VS Code Copilot
Add the URL to your AI context via @ docs or the equivalent “add context” feature in your IDE. Cursor supports @URL directly:
```
@https://imbraceltd.github.io/api-sdk/llms.txt

How do I upload a file and trigger embedding processing?
```

Any other LLM
Copy the raw file content and paste it at the start of your prompt before asking your question. Most LLMs with a 32k+ context window can ingest the full file without summarisation loss.
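The pattern above, context first and question second, can be sketched as a small helper. `build_prompt` is a hypothetical name of my own; the `<context>` wrapper follows the Claude example earlier on this page:

```python
def build_prompt(llms_txt: str, question: str) -> str:
    """Prepend the llms.txt contents to a question so the model
    sees the SDK summary before the task description."""
    return f"<context>\n{llms_txt}\n</context>\n\n{question}"


prompt = build_prompt(
    "Imbrace SDK summary goes here...",
    "How do I upload a file and trigger embedding processing?",
)
```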
Example prompts
Once the AI has the llms.txt context, try prompts like:
- “Show me how to create an AI assistant and stream a chat response in Python.”
- “Generate TypeScript code to list all embedding files and delete ones with status error.”
- “What’s the difference between streamChat and streamSubAgentChat?”
- “Write an Express.js auth proxy for the Chat Client, following the Integrations guide pattern.”
Keep it up to date
The file is regenerated on every release. Re-fetch the URL if you upgrade the SDK to pick up new methods or changed signatures.