
```bash
# .env - drop-in replacement for any OpenAI-compatible tool
OPENAI_BASE_URL=https://api.inworld.ai/v1
OPENAI_API_KEY=your-inworld-api-key
OPENAI_MODEL=inworld/vibe-coding-by-task

# Works with: Cursor, Claude Code, Codex CLI, Aider,
# Continue, LangChain, Vercel AI SDK, and more.
#
# Use /code, /review, or /docs prefixes in your prompts
# to route to the best model for each task.
```
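Most of the tools listed above load `.env` files on their own; if yours does not, `python-dotenv` is the usual choice. For illustration, a minimal loader (a sketch, not a full parser: no quoting or variable expansion) looks like:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: reads KEY=VALUE lines, skips blanks and # comments.

    Existing environment variables win, matching typical dotenv behavior.
    """
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, sep, value = line.partition("=")
            if sep:  # ignore malformed lines with no '='
                os.environ.setdefault(key.strip(), value.strip())
```

After `load_env()` runs, any OpenAI-compatible client created without explicit arguments can pick the values up from the environment; the official `openai` Python SDK, for example, reads `OPENAI_API_KEY` and `OPENAI_BASE_URL` automatically.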
Switching takes a few minutes:

```bash
# before
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=sk-...

# after - 3 minutes
OPENAI_BASE_URL=https://api.inworld.ai/v1
OPENAI_API_KEY=your-inworld-api-key
OPENAI_MODEL=inworld/vibe-coding-by-task
```

- Workflow never stops.
- 0% markup: provider rates, nothing added.
- Best model per task, inline.
## Configure Cursor
1. Open Cursor Settings (Cmd+Shift+J or Ctrl+Shift+J)
2. Go to "Models" section
3. Click "Add Model"
4. Set:
- Model name: inworld/vibe-coding-by-task
- API Base: https://api.inworld.ai/v1
- API Key: your-inworld-api-key
5. Select "inworld/vibe-coding-by-task" as your default model
Alternatively, use the OpenAI client override:

```python
from openai import OpenAI

# Before:
# client = OpenAI(api_key="sk-...")

# After:
client = OpenAI(
    base_url="https://api.inworld.ai/v1",
    api_key="your-inworld-api-key",
)

# Default route - MiniMax M2.5
response = client.chat.completions.create(
    model="inworld/vibe-coding-by-task",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Use the /code prefix for the selected coding model:
stream = client.chat.completions.create(
    model="inworld/vibe-coding-by-task",
    messages=[{"role": "user", "content": "/code Implement a connection pool with retry logic"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
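The /code, /review, and /docs prefixes are just text prepended to the user message, so routing can be factored into a small helper. A sketch (the `task_prompt` name and the strict prefix check are our own, not part of the Inworld API):

```python
def task_prompt(task: str, prompt: str) -> str:
    """Prepend a routing prefix to a prompt.

    'code', 'review', and 'docs' are the prefixes documented above;
    anything else is rejected rather than silently sent unrouted.
    """
    if task not in ("code", "review", "docs"):
        raise ValueError(f"unknown task prefix: {task!r}")
    return f"/{task} {prompt}"

# e.g. route a review request:
# client.chat.completions.create(
#     model="inworld/vibe-coding-by-task",
#     messages=[{"role": "user",
#                "content": task_prompt("review", "Check this diff for race conditions")}],
# )
```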