@nyrra/foundry-ai (TanStack Intent Registry)
Versions:
0.0.5-rc.2 (latest) — 1 skill
0.0.5-rc.1 — 1 skill
0.0.5-rc.0 — 1 skill
0.0.4 — 1 skill
0.0.3 — 1 skill
0.0.3-rc.3 — 1 skill
0.0.3-rc.2 — 1 skill
0.0.3-rc.1 — 1 skill
Thin Palantir Foundry provider adapters and model catalog for the Vercel AI SDK.
Skills (1)
foundry-ai-provider
Use when wiring @nyrra/foundry-ai into an app that should call Palantir Foundry LLM proxy endpoints through the AI SDK. Covers env setup, provider peer selection, alias vs RID routing, OpenAI compatibility rules, Google beta caveats, and when to avoid unverified Foundry-native runtimes.
46 lines
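To make the skill's scope concrete, here is a minimal sketch of what "wiring a Foundry LLM proxy into the AI SDK" can look like. It leans on the OpenAI-compatibility angle the skill mentions and uses only documented Vercel AI SDK calls (`createOpenAI`, `generateText`); the endpoint URL, environment variable names, and model alias are placeholders, not values documented by @nyrra/foundry-ai.

```typescript
// Hypothetical wiring sketch: point an OpenAI-compatible AI SDK provider
// at a Foundry LLM proxy. The env var names and URL shape are assumptions.
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const foundry = createOpenAI({
  // Foundry's OpenAI-compatible proxy endpoint (placeholder).
  baseURL: process.env.FOUNDRY_PROXY_URL,
  apiKey: process.env.FOUNDRY_TOKEN,
});

// Models may be addressed by a human-readable alias or by a resource
// identifier (RID); which form the proxy resolves is deployment-specific.
const { text } = await generateText({
  model: foundry("gpt-4o"), // alias-style reference (assumed)
  prompt: "Summarize the latest deployment notes.",
});
console.log(text);
```

The actual adapter presumably wraps this setup (env resolution, peer provider selection, alias-vs-RID routing) behind its own factory; consult the skill itself for the real API surface.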