# AI Prompt Guide: Teaching AILANG to Language Models
**Purpose:** This document points to the canonical AILANG teaching prompt for AI models.

**KPI:** One of AILANG's key success metrics is "teachability to AI": how easily can an LLM learn to write correct AILANG code from a single prompt?
## Canonical Prompt (v0.4.4)
The official AILANG teaching prompt is maintained at `prompts/v0.4.4.md`.
This prompt is:

- **Validated through eval benchmarks** - tested across GPT-5, Gemini 2.5 Pro, and Claude Sonnet 4.5
- **Up-to-date with the latest features** - record updates, auto-import prelude, syntactic sugar
- **Versioned with SHA-256 hashing** - reproducible eval results
- **Actively maintained** - updated as the language evolves
## Quick Reference
**Current version:** v0.4.4
**Core Features** (illustrated in the sketch after this list):

- Module execution with effects
- Recursion (self-recursive and mutually recursive)
- Block expressions: `{ stmt1; stmt2; result }`
- Records (literals + field access + updates)
- Multi-line ADTs: `type Tree = | Leaf | Node`
- Record update syntax: `{base | field: value}`
- Auto-import of `std/prelude` - no imports needed for comparisons
- Syntactic sugar: `::` cons, `->` function types, `f()` zero-arg calls
- Effects: IO, FS, Clock, Net, Env
- Type classes, ADTs, pattern matching
- REPL with full type checking
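To make the list concrete, here is a minimal sketch that combines several of these features in one module. It is an illustration under assumptions, not verbatim from the prompt: the module path, the `std/io` import, and the exact `match` arm syntax follow common AILANG examples rather than this guide.

```ailang
-- Hedged sketch, not verbatim from the prompt: the module path,
-- the std/io import, and the match syntax are assumptions.
module examples/quickref

import std/io (println)

-- Multi-line ADT declaration
type Tree =
  | Leaf
  | Node

export func main() -> () ! {IO} {
  let base = {label: "root", depth: 1};  -- record literal
  let next = {base | depth: 2};          -- record update syntax
  let path = 1 :: 2 :: [];               -- :: cons sugar (illustrative)
  match Node {                           -- pattern matching on the ADT
    Leaf => println("leaf"),
    Node => println(next.label)          -- record field access
  }
}
```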
**Known Limitations:**

- See Implementation Status for current limitations
- See `LIMITATIONS.md` for workarounds
For the complete syntax guide, see `prompts/v0.4.4.md`.
## Using the Prompt
### For AI Code Generation
When asking an AI model (Claude, GPT, Gemini) to write AILANG code, provide the full prompt from `prompts/v0.4.4.md`.
Example usage:

```text
I need you to write AILANG code to solve this problem: [problem description]

First, read this AILANG syntax guide:
[paste contents of prompts/v0.4.4.md]

Now write the code.
```
### For Eval Benchmarks
The eval harness automatically loads the correct prompt version from `prompts/versions.json`:
```yaml
# benchmarks/example.yml
id: example_task
languages: ["ailang", "python"]
# Prompt version is auto-resolved from prompts/versions.json
task_prompt: |
  Write a program that [task description]
```
See `benchmarks/README.md` for details.
## Current Prompt
**Version:** v0.4.4 (full prompt: `prompts/v0.4.4.md`)
**Core Features Documented:**

- Multi-line ADTs: `type Tree = | Leaf | Node`
- Record updates: `{base | field: value}`
- Auto-import of `std/prelude`
- Syntactic sugar: `::` cons, `->` types, `f()` zero-arg calls
- Full module system with effects (IO, FS, Clock, Net, Env)
- Pattern matching, recursion, type classes (see the recursion sketch below)
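Recursion is worth a tiny example of its own. The sketch below assumes `[int]` list types and the `[]` / `::` patterns, which mirror the cons sugar listed above but are not spelled out in this guide.

```ailang
-- Hedged sketch of a self-recursive function; the [int] type and
-- the [] / :: patterns are assumed syntax consistent with :: cons sugar.
func length(xs: [int]) -> int {
  match xs {
    [] => 0,
    _ :: rest => 1 + length(rest)
  }
}
```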
**Why prompt quality matters:**
- Better AI code generation
- Reproducible eval results
- Consistent teaching across models
## Eval Results
**Current success rates:**
- See Benchmark Dashboard for latest metrics
- Best model: Claude Sonnet 4.5 (consistently highest success rates)
- Results updated after each release
**Key Insights:**
- Teaching prompt quality directly impacts AI success rates
- Multi-model testing reveals universal vs model-specific patterns
- Iterative prompt improvements correlate with better code generation
## Contributing Improvements
If you find ways to improve the AILANG teaching prompt:
1. Test your changes with the eval harness:

   ```bash
   ailang eval --benchmark all --model gpt-4o-mini
   ```

2. Measure the impact:

   ```bash
   tools/compare_prompts.sh old_version new_version
   ```

3. Create a new prompt version following the versioning system in `prompts/versions.json`

4. Document your changes with the SHA-256 hash and notes in `prompts/versions.json` (a hypothetical entry is sketched below)
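The schema of `prompts/versions.json` is not reproduced in this guide, so the entry below is purely hypothetical: the version number and every field name are assumptions for illustration, and the hash is a placeholder.

```json
{
  "v0.4.5": {
    "file": "prompts/v0.4.5.md",
    "sha256": "<sha-256 of prompts/v0.4.5.md - placeholder>",
    "notes": "Hypothetical entry: summarize changes relative to v0.4.4"
  }
}
```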
## See Also
- `CLAUDE.md` - Instructions for AI assistants working on AILANG development
- `examples/` - Working AILANG code examples
- Language Reference - Complete AILANG syntax guide
- `benchmarks/` - Eval harness benchmark suite