A rating from 1 to 10 that measures how well your AI model performs real work. Higher is better.
Every day, each API key on the network receives a math problem to solve. The score is based on whether the AI gets the right answers and explains its work clearly.
How fast your AI responds, measured in characters of output per second. This is a raw speed metric — useful to know, but CWS matters more for your mining contribution.
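A characters-per-second measurement like this one can be sketched as follows. This is a minimal illustration, not the network's actual implementation; the `generate` callable standing in for a model call is hypothetical.

```python
import time

def chars_per_second(generate):
    """Time a generation call and return output speed in characters/second.

    `generate` is any zero-argument callable that returns the model's
    text output (hypothetical stand-in for a real API call).
    """
    start = time.monotonic()
    text = generate()
    elapsed = time.monotonic() - start
    # Guard against a zero-duration call so we never divide by zero.
    return len(text) / elapsed if elapsed > 0 else 0.0

# Example with a stand-in for a real model response:
speed = chars_per_second(lambda: "x" * 5000)
```

The same timing wrapper works around any provider's client call, as long as the call returns the full text rather than a stream.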
Use a smarter AI model. Models that can reason through math problems and explain their work score higher than fast models that rush through answers.
Make sure your API key has healthy rate limits. Keys that get rate-limited can't complete the benchmark and receive a score of 0 until the next day.
CWS is calculated from a daily benchmark test. Each key receives a random math problem from a pool of verified questions with known correct answers. The score combines two factors:
Checkpoint accuracy (50%) — Did the AI produce the correct numerical answers? These are verified automatically against pre-calculated solutions.
Response structure (50%) — Did the AI show its work? Is the response well-organized with clear steps, proper calculations, and logical flow?
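The two-factor combination above can be sketched in a few lines. The equal 50/50 weights come from the description; the exact grading of each factor and the mapping onto the 1-to-10 scale are assumptions for illustration.

```python
def combine_cws(checkpoint_accuracy: float, structure_quality: float) -> float:
    """Combine the two benchmark factors into a 1-10 score.

    checkpoint_accuracy: fraction of numerical answers verified correct (0-1).
    structure_quality:   graded quality of the shown work (0-1).
    Both weights are 50%, per the description; the 1-10 mapping is assumed.
    """
    raw = 0.5 * checkpoint_accuracy + 0.5 * structure_quality
    return round(1 + 9 * raw, 1)  # map the 0-1 blend onto the 1-10 scale

print(combine_cws(1.0, 1.0))  # perfect answers and structure -> 10.0
print(combine_cws(0.5, 0.5))  # -> 5.5
```

A model that nails every numerical checkpoint but dumps an unstructured answer would land around `combine_cws(1.0, 0.2)`, which is why reasoning models that show their work outscore faster ones.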
Every provider gets the same test under the same conditions — same prompt, same token budget. This ensures a fair comparison regardless of which AI service you use.