The charting library you use is being recommended, or ignored, by AI. And most chart libraries have zero strategy for influencing which way that goes.
When a developer asks Claude, ChatGPT, or Copilot to "generate a chart for this data," the LLM picks a library based on its training data. Right now, that overwhelmingly means Chart.js, Matplotlib, or D3.js. Not because they're the best tools for every job, but because AI models were trained on millions of code examples using those libraries, and the libraries themselves have done nothing to change the equation.
The llms.txt standard, and tools like Context7, are changing this. They give library maintainers a direct line to AI systems at inference time. If you build or maintain a charting library and you're not thinking about this, you're already losing ground.
## What Is llms.txt and Why Should Chart Libraries Care?
The llms.txt standard, proposed by Jeremy Howard (co-founder of Answer.AI and fast.ai), is deceptively simple. It's a markdown file placed at the root of your website that provides LLM-friendly content about your project. Think of it as robots.txt, but for AI reasoning rather than crawling.
Without llms.txt, LLMs default to training data that may be years out of date. With it, they get current, accurate documentation at inference time.
The specification defines two files:
| File | Purpose | Size |
|---|---|---|
| `/llms.txt` | Lightweight index with descriptions and links to detailed docs | Small, fits in a context window |
| `/llms-full.txt` | Complete documentation in a single file | Can be very large, typically used with RAG |
The format is plain markdown. Here's what a minimal llms.txt looks like:
```markdown
# My Charting Library

> A brief description of the library for LLMs

Key notes and important context here.

## Core Documentation

- [Getting Started](https://example.com/docs/getting-started): Quick setup guide
- [API Reference](https://example.com/docs/api): Complete API documentation

## Optional

- [Migration Guide](https://example.com/docs/migration): For upgrading versions
```
The critical difference from sitemap.xml is intent. Sitemaps help crawlers index pages. llms.txt helps AI models understand what your library does, how it works, and how to write correct code with it. At inference time. When a developer is actually asking for help.
## The Current State: Chart Libraries Are Asleep at the Wheel
I checked the major JavaScript charting libraries. The results are not great.
| Library | Has /llms.txt? | Has /llms-full.txt? | On Context7? | AI-Optimized Docs? |
|---|---|---|---|---|
| Chart.js | No | No | Limited | No |
| D3.js | No | No | Limited | No |
| Highcharts | In development | In development | Yes | Active work |
| ECharts | No | No | Limited | No |
| Plotly.js | No | No | Limited | No |
| Recharts | No | No | Limited | No |
| ApexCharts | No | No | No | No |
This is a massive blind spot. These libraries have excellent documentation for humans: detailed API references, interactive examples, migration guides. But none of it is optimized for how AI models actually consume and use information.
When an LLM needs to write charting code, it relies on:
- Training data (static, often outdated, heavily biased toward whatever was on GitHub/Stack Overflow years ago)
- Web search results (messy HTML, ads, navigation chrome, cookie banners)
- Whatever the user pastes into the prompt
There's no step where the LLM can say "let me check the library's own AI-friendly documentation for the current API."
## What This Means in Practice
The consequences are real and measurable. When you ask an LLM to generate a chart, here's what actually happens:
### Chart.js: Confident but outdated
The model has seen millions of Chart.js examples in training data, so it generates code confidently. But that code often uses Chart.js v2 or v3 syntax, even though v4 shipped in late 2022 and introduced breaking changes. Without an llms.txt pointing to current docs, the model has no way to self-correct.
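To make the failure mode concrete, here is a sketch of the drift this produces. The `scales` option was restructured in Chart.js v3 and carried into v4, but a model trained mostly on v2 examples keeps emitting the old shape. Plain object literals are enough to show the difference; no Chart.js import is needed:

```javascript
// Chart.js v2 (what training data often yields): scales are ARRAYS
// under xAxes / yAxes, with beginAtZero nested inside ticks.
const v2Options = {
  scales: {
    xAxes: [{ ticks: { beginAtZero: true } }],
    yAxes: [{ ticks: { beginAtZero: true } }],
  },
};

// Chart.js v3/v4: each scale is a NAMED OBJECT, and beginAtZero
// moved up to the scale itself. The v2 shape is silently ignored.
const v4Options = {
  scales: {
    x: { beginAtZero: true },
    y: { beginAtZero: true },
  },
};

console.log(Array.isArray(v2Options.scales.xAxes)); // true
console.log("xAxes" in v4Options.scales);           // false
```

A model that has never seen the v4 shape at inference time has no signal that the first config will misbehave under the current major version.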
### D3.js: Powerful but version-confused
LLMs generate D3 code frequently, but the API surface is enormous. The model often mixes up D3 v5, v6, and v7 patterns. Selection behavior, module imports, and data join syntax have all changed across versions. Again, no llms.txt means no course correction.
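One concrete example of that version mixing: D3 v6 removed the global `d3.event` and now passes the event directly to listeners. The sketch below illustrates the signature change with a stand-in listener (`onClick` is a hypothetical handler name), so it runs without importing d3:

```javascript
// D3 v5 and earlier (the pattern LLMs often emit): listeners read the
// shared global `d3.event`:
//   selection.on("click", function (d) { zoomTo(d3.event.pageX, d); });

// D3 v6+ removed d3.event; the event is the listener's FIRST argument
// and the bound datum is the second:
function onClick(event, datum) {
  return `clicked ${datum.label} at x=${event.pageX}`;
}

// Simulating what a d3 v6+ selection would pass to the listener:
console.log(onClick({ pageX: 120 }, { label: "Q3 revenue" }));
// → "clicked Q3 revenue at x=120"
```

Code that mixes the two conventions fails at runtime in non-obvious ways, which is exactly the class of bug an llms.txt with explicit version notes could prevent.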
### Highcharts: Under-recommended despite being the right tool
The model recommends it less often than the free alternatives, not because it's worse, but because there's less open-source example code in the training data. A well-crafted llms.txt could change this by making the library's capabilities visible at inference time.
### ECharts: Hidden capabilities
Similar situation. Despite being one of the most powerful charting libraries (WebGL rendering, 3D charts, massive dataset handling), LLMs default to simpler alternatives because those alternatives have more training data presence.
*LLM recommendations are heavily biased toward libraries with the most training data, not necessarily the best tool for the job.*
## Context7: The MCP Approach to Documentation
While llms.txt is a static file on your website, Context7 takes a more active approach. Built by Upstash, Context7 is an MCP (Model Context Protocol) server that indexes library documentation and serves version-specific snippets on demand.
Here's how it works:
1. A developer writes a prompt in Cursor, Claude, or another AI coding tool
2. They add `use context7` to the prompt
3. The MCP server resolves the library and fetches relevant, up-to-date documentation
4. The LLM gets clean, version-specific code examples injected into its context
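As a concrete setup example, Context7's server is registered in an MCP client's config file. This sketch assumes the npm package name `@upstash/context7-mcp` (verify against Context7's current docs); the config file location varies by client, e.g. `.cursor/mcp.json` for Cursor or `claude_desktop_config.json` for Claude Desktop:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```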
For chart libraries, this is a game-changer. Instead of the LLM guessing which API to use based on training data from 2022, it gets the actual current documentation.
The difference between llms.txt and Context7 is like the difference between a static FAQ page and a live chat agent. Both are useful, but they serve different interaction patterns.
| Feature | llms.txt | Context7 (MCP) |
|---|---|---|
| Deployment | Static file on your server | External service + MCP integration |
| Update frequency | When you update the file | Continuously re-indexed |
| Version awareness | Manual (you write which versions) | Automatic version matching |
| Integration | Any LLM that can fetch URLs | MCP-compatible clients (Cursor, Claude, etc.) |
| Token efficiency | You control what's included | Smart retrieval with token limits |
| Maintenance effort | Low | Medium (need to register and maintain) |
The smart play: Do both. llms.txt captures the broad audience of LLMs that can read web content. Context7 captures the growing segment of developers using MCP-enabled coding tools.
## What a Good llms.txt Looks Like for a Chart Library
Most llms.txt files focus on API references. For a charting library, you need something different. Here's what to include:
### 1. Capabilities summary
LLMs need to know whether your library can even do what the developer is asking. If someone asks for a Gantt chart and your library supports it, this needs to be stated explicitly and early in the file.
### 2. Quick-start code patterns
Not the full API reference. The most common patterns: how to create a basic line chart, how to add data, how to customize axes. These are the patterns LLMs will generate 90% of the time.
### 3. Configuration format
Is it JSON options? A builder pattern? JSX components? The LLM needs to know the shape of the configuration before it can generate correct code.
### 4. Data format expectations
How does data get into the chart? Array of objects? Array of arrays? CSV? API endpoint? This trips up LLMs constantly because different libraries expect fundamentally different data shapes.
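To illustrate why this trips models up, here is the same three-point series in the default shapes three popular libraries expect. The shapes reflect each library's commonly documented basics and are worth verifying against current docs:

```javascript
// Chart.js: parallel arrays — labels plus one data array per dataset.
const chartJsData = {
  labels: ["Jan", "Feb", "Mar"],
  datasets: [{ label: "Sales", data: [10, 25, 15] }],
};

// Recharts: one array of row objects, one key per series.
const rechartsData = [
  { month: "Jan", sales: 10 },
  { month: "Feb", sales: 25 },
  { month: "Mar", sales: 15 },
];

// Highcharts: series objects whose data can be [x, y] point pairs.
const highchartsSeries = [
  { name: "Sales", data: [[0, 10], [1, 25], [2, 15]] },
];

// Same numbers, three mutually incompatible shapes:
const total = chartJsData.datasets[0].data.reduce((a, b) => a + b, 0);
console.log(total); // 50
```

An LLM that guesses the wrong shape produces code that renders an empty chart with no error, which is why the expected format belongs near the top of an llms.txt.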
### 5. Version-specific breaking changes
If your v4 API is incompatible with v3, say so explicitly. LLMs will generate the wrong version's code unless they're told not to.
### 6. What makes your library different
This is the competitive angle. If your library handles 10 million data points via WebGL while the alternatives choke at 10,000, say that. If you have specialized financial chart types, say that. Give the LLM a reason to recommend your library when it's the right tool.
*A well-structured llms.txt for a chart library prioritizes what LLMs need most: capabilities, code patterns, and data formats.*
Here's a skeleton for a chart library llms.txt:
```markdown
# [Library Name]

> [One-sentence description including key differentiator]

[Library Name] is a [framework/language] charting library.
Current stable version: [X.Y.Z].
License: [MIT/Apache/Commercial].
Install: [npm install command or CDN link].

Important: [Critical notes about version differences, common mistakes, etc.]

## Quick Start

- [Basic Chart](url): Minimal code to render a chart
- [Configuration Reference](url): Full options/config object reference
- [Data Formats](url): Supported data input formats

## Chart Types

- [Line & Area Charts](url): Time series, trends, comparisons
- [Bar & Column Charts](url): Categorical comparisons
- [Pie & Donut Charts](url): Part-to-whole relationships
- [Scatter & Bubble](url): Correlation analysis
- [Specialized Charts](url): [List any unique types: Gantt, Stock, Maps, etc.]

## Framework Integration

- [React](url): React wrapper/components
- [Vue](url): Vue integration
- [Angular](url): Angular integration

## Advanced

- [Large Datasets](url): Performance with big data
- [Real-time Updates](url): Live/streaming data
- [Export](url): PNG, SVG, PDF export
- [Accessibility](url): Screen reader support, WCAG compliance

## Optional

- [Migration Guide](url): Upgrading from previous versions
- [Themes & Styling](url): Customizing appearance
- [API Reference](url): Complete API documentation
```
## The MCP Connection: Why This Matters Even More Now
The Model Context Protocol is creating a new channel for tools to feed context to AI models. Chart libraries that build MCP servers alongside their llms.txt files get a double advantage:
- Static discoverability (llms.txt): Any LLM with web access can find and use the documentation
- Dynamic integration (MCP): AI coding assistants can pull exact, version-specific code patterns in real-time
There are already MCP servers for chart generation. AntV's mcp-server-chart supports 26+ chart types and works with Claude, Cursor, and other MCP clients. But the major Western charting libraries haven't built their own MCP servers yet.
Open opportunity: The first major chart library to ship both a well-crafted llms.txt and an MCP server will have a significant advantage in the AI-assisted development era. When a developer asks Claude or Copilot "what's the best chart library for my use case," the one with AI-optimized documentation will win the recommendation.
## The Business Case: AI Recommendations Drive Adoption
This isn't theoretical. AI-driven recommendations are already influencing which libraries developers choose.
When someone asks ChatGPT or Claude "help me add a chart to my React app," the suggested library becomes the one the developer actually installs. Unlike Google search results, where developers might compare 5–10 options, AI recommendations often produce a single answer. If your library isn't the one being recommended, you're losing developer adoption at scale.
*AI collapses the library selection funnel from a multi-step comparison to a single recommendation. If you're not the one being recommended, you're invisible.*
For open-source libraries, this means fewer contributors and a shrinking ecosystem. For commercial libraries, this directly impacts revenue. Every time an LLM recommends Chart.js when Highcharts would have been the better tool, that's a lost potential customer who never even evaluated the product.
The Semrush study on AI Overviews found that about 60% of keywords triggering AI-generated answers have 100 or fewer monthly searches. These are exactly the long-tail developer queries like "best chart library for real-time financial data" or "JavaScript stock chart with WebGL" where specialized libraries should win but often don't because the AI doesn't know about their capabilities.
## What Chart Library Maintainers Should Do Right Now
### Step 1: Create your llms.txt today
It takes an hour. Use the skeleton above. Focus on what makes your library unique and include working code examples for the most common chart types.
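A quick sanity check worth running before you ship: extract every link from the file so each URL can be spot-checked (with `fetch` or `curl`) before publishing. This Node-flavored sketch pulls out the `- [Title](url): description` bullets the format uses; the sample content here is illustrative:

```javascript
// Extract [Title](url) links from an llms.txt body for link checking.
const llmsTxt = `# My Charting Library
> A brief description of the library for LLMs

## Core Documentation
- [Getting Started](https://example.com/docs/getting-started): Quick setup guide
- [API Reference](https://example.com/docs/api): Complete API documentation
`;

const links = [...llmsTxt.matchAll(/\[([^\]]+)\]\((https?:\/\/[^)\s]+)\)/g)]
  .map(([, title, url]) => ({ title, url }));

console.log(links.length);   // 2
console.log(links[1].title); // "API Reference"
```

A dead link in llms.txt is worse than a dead link in human docs, because the model fetching it fails silently and falls back to its stale training data.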
### Step 2: Register on Context7
Get your documentation indexed so MCP-enabled tools can serve it to developers.
### Step 3: Create markdown versions of your documentation
Both llms.txt and Context7 work best with clean markdown. If your docs are HTML-heavy single-page apps, the AI models can't easily consume them.
### Step 4: Think about an MCP server
This is the bigger investment, but it's where the ecosystem is heading. An MCP server that can generate chart configurations from natural language descriptions is incredibly powerful for developer adoption.
### Step 5: Monitor what LLMs say about your library
Ask Claude, ChatGPT, and Gemini "what's the best JavaScript chart library for [your strength]?" If you're not being recommended, your AI documentation strategy isn't working yet.
## The Bottom Line
The era of SEO-optimized documentation is giving way to the era of LLM-optimized documentation. Chart libraries that adapt will be the ones developers use in 2026 and beyond. The ones that don't will gradually fade from AI recommendations, and therefore from developer adoption.
llms.txt isn't going to be a magic bullet. Google's John Mueller has confirmed that major search engines don't formally follow it yet. But the direction is clear. Documentation platforms like Mintlify, GitBook, and Fern are auto-generating llms.txt files. Yoast added llms.txt generation to WordPress. The standard is moving from proposal to infrastructure.
For chart libraries specifically, the stakes are high. Data visualization is one of the most common tasks developers delegate to AI assistants. The library that's most visible and most accurately represented in AI-generated code wins. That visibility starts with making your documentation AI-readable.
The file is small. The effort is minimal. The upside is enormous. Ship your llms.txt.