Mining Metrics

How CentroShield measures the value of AI work on the network
QUALITY

CWS — Centro Work Score

A rating from 1 to 10 that measures how well your AI model performs real work. Higher is better.

Every day, each API key on the network receives a math problem to solve. The score is based on whether the AI gets the right answers and explains its work clearly.

8.6 — Excellent. Got most answers right, showed clear work.
5.2 — Average. Some correct answers, could explain better.
2.1 — Poor. Wrong answers or barely responded.
SPEED

NU/s — Neural Units per Second

How fast your AI responds, measured in characters of output per second. This is a raw speed metric — useful to know, but CWS matters more for your mining contribution.

3,000 NU/s — Fast response, typical for Groq.
800 NU/s — Slower but may produce higher quality.
Speed alone doesn't determine your score.
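The NU/s figure above can be computed with a simple timer: divide the characters the model returned by the seconds the call took. The sketch below assumes a hypothetical `generate` function standing in for whatever client sends the prompt to your AI provider; it is illustrative, not CentroShield's actual measurement code.

```python
import time

def measure_nu_per_second(generate, prompt):
    """Time one model call and return NU/s (output characters per second).

    `generate` is a hypothetical stand-in for a provider call that takes
    a prompt string and returns the full text response.
    """
    start = time.monotonic()
    response = generate(prompt)
    elapsed = time.monotonic() - start
    # Guard against a zero-resolution clock reading on very fast calls.
    return len(response) / elapsed if elapsed > 0 else 0.0
```

A 3,000-character response that takes one second would score 3,000 NU/s under this definition.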
TIP

How to Improve Your Score

Use a smarter AI model. Models that can reason through math problems and explain their work score higher than fast models that rush through answers.

Make sure your API key has healthy rate limits. Keys that get rate-limited can't complete the benchmark and receive a score of 0 until the next day.

Technical Details

CWS is calculated from a daily benchmark test. Each key receives a random math problem from a pool of verified questions with known correct answers. The score combines two factors:

Checkpoint accuracy (50%) — Did the AI produce the correct numerical answers? These are verified automatically against pre-calculated solutions.

Response structure (50%) — Did the AI show its work? Is the response well-organized with clear steps, proper calculations, and logical flow?
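The 50/50 weighting can be sketched as a small scoring function. The inputs, the mapping onto the 1–10 scale, and the rounding are illustrative assumptions; the document only specifies the equal split between the two factors.

```python
def combine_cws(checkpoint_accuracy, structure_score):
    """Combine the two benchmark factors into a 1-10 CWS.

    Both inputs are assumed to be fractions in [0, 1]. The equal
    weighting matches the 50/50 split described above; mapping the
    combined fraction onto a 1-10 scale is an illustrative assumption.
    """
    combined = 0.5 * checkpoint_accuracy + 0.5 * structure_score
    return round(1 + 9 * combined, 1)
```

Under this mapping, perfect accuracy with perfect structure yields 10.0, and a blank response yields 1.0.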

Every provider gets the same test under the same conditions — same prompt, same token budget. This ensures a fair comparison regardless of which AI service you use.
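One way to picture "same prompt, same token budget" is a single request template applied to every provider. The field names and values below are illustrative assumptions, not CentroShield's real configuration.

```python
# Every provider is benchmarked under identical conditions.
# These parameter names and values are illustrative only.
BENCHMARK = {
    "prompt_template": "Solve the following problem. Show your work.\n\n{problem}",
    "max_tokens": 2048,   # same token budget for every provider
    "temperature": 0.0,   # deterministic settings for comparability
}

def build_request(problem):
    """Build the identical request payload sent to every provider."""
    return {
        "prompt": BENCHMARK["prompt_template"].format(problem=problem),
        "max_tokens": BENCHMARK["max_tokens"],
        "temperature": BENCHMARK["temperature"],
    }
```

Because every key answers the same problem with the same payload, score differences reflect the model, not the test.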