What it is

Assessment without assumptions

Most documents get evaluated informally. Someone reads them, forms a view, and either approves them or sends them back with comments. That process works, up to a point. What it rarely produces is a consistent, repeatable assessment that separates what is genuinely strong from what merely looks polished.

The Document Quality Evaluator changes that. It provides a structured framework for assessing a document against eight defined quality dimensions, producing a scored evaluation that is honest, specific, and grounded in what the document actually contains, not in assumptions about how it was made or who made it.

The framework is designed to work without access to the development process. Whether you are the document's author reviewing your own work before sharing it, a reviewer assessing a document you have been handed, or an independent assessor with no knowledge of how it was produced, the evaluation draws solely from what the document reveals about itself.

Author self-review

You produced the document and want to assess it on its own terms, setting aside what you know about how it was developed. This is the most common use case before submission, sharing, or publication.

Assigned reviewer

You have been given the document to assess. You may or may not have development context. The framework allows you to choose whether to use that context or evaluate in isolation, and records which approach was taken.

Independent assessor

You have no development knowledge and may be assessing whether AI was involved in the document's production. In this position, the framework's AI Voice measure operates as a detection and characterisation tool rather than a quality score.


The framework

Eight dimensions of quality

Each dimension targets a distinct aspect of quality. Every dimension is scored on a five-band scale from Insufficient to Exemplary.

Dimension 01
Fit to Context

Does the document communicate its own purpose, audience, and constraints, and then consistently serve them?

Dimension 02
Evidence and Grounding

Are claims supported by evidence? Are specific assertions appropriately qualified where verification is not demonstrated?

Dimension 03
Analytical Depth

Given what this document appears to set out to do, does it contain the level of analytical work that purpose requires?

Dimension 04
Purposeful Structure

Does the structure serve the reader, guiding them efficiently toward understanding or a decision, or does it impose form without function?

Dimension 05
Appropriate Register

Is the document's voice, formality, and language level consistent, coherent, and appropriate to its implied audience?

Dimension 06
Critical Integrity

Does the document say what its evidence actually supports? Are claims proportionate, uncertainty acknowledged, and conclusions honestly drawn?

Dimension 07
Internal Consistency

Does the document hold together as a coherent whole? Are claims consistent across sections, and does the conclusion follow from the argument made?

Dimension 08
Completeness against Evident Purpose

Does the document actually do what it sets out to do? Assessed against the purpose the document reveals about itself, does it arrive where it should?


How it works

Structure, modes, and scoring

Before you begin

Seven opening questions

Every evaluation begins with seven questions. They establish your relationship to the document, whether AI involvement is known, the document's type and developmental stage, what context is available beyond the document itself, and whether you want a full or light-touch review.

The answers calibrate how the framework is applied and are recorded in the evaluation report.

Operating modes

Inference and informed

The framework operates in one of two modes, determined by how much context is available. In inference mode, all scores are based solely on what the document reveals. In informed mode, contextual information you provide is used alongside the document.

Both modes are rigorous. The report always declares which was applied.
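The mode declaration described above can be sketched as a simple data record. This is an illustrative sketch only: the field names and the dataclass representation are assumptions, not a format the framework prescribes.

```python
# A minimal sketch of an evaluation record that declares its operating mode,
# as the report is required to do. Field names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class EvaluationRecord:
    mode: str                # "inference" or "informed"
    context_provided: bool   # whether material beyond the document was used
    dimension_scores: dict = field(default_factory=dict)

    def __post_init__(self):
        # The framework operates in exactly one of two modes.
        if self.mode not in ("inference", "informed"):
            raise ValueError("mode must be 'inference' or 'informed'")
```

A record created in inference mode would carry `context_provided=False`, since all scores rest solely on what the document reveals.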

Scoring

The five-band scale

Scores reflect what is actually present, not what was intended.

Insufficient 0–2
Partial 3–4
Adequate 5–6
Capable 7–8
Exemplary 9–10

Adequate means the document passed a minimum threshold and nothing more. Capable is genuinely strong but not excellent. Exemplary means nothing more could reasonably be asked of the document at this standard.
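The five-band mapping above can be expressed as a simple lookup. The band names and score boundaries come directly from the scale; the function name and structure are illustrative, assuming whole-number scores from 0 to 10.

```python
# Band boundaries taken from the five-band scale above.
BANDS = [
    (0, 2, "Insufficient"),
    (3, 4, "Partial"),
    (5, 6, "Adequate"),
    (7, 8, "Capable"),
    (9, 10, "Exemplary"),
]


def band_for(score: int) -> str:
    """Return the quality band for a 0-10 dimension score."""
    for low, high, name in BANDS:
        if low <= score <= high:
            return name
    raise ValueError(f"score must be between 0 and 10, got {score}")
```

For example, a dimension scored 7 falls in the Capable band: genuinely strong, but not excellent.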


Evaluate a document

Two ways to evaluate

Choose the evaluation route that suits your purpose. Both routes produce a scored evaluation across all eight dimensions.

Self-assess
Evaluate it yourself

Work through all eight dimensions using the structured evaluation form. Select a score for each dimension, add notes where useful, and download a formatted PDF report at the close. No prior experience of the framework is required; the form provides the structure while you bring the judgement.

Open access, no login required
AI-assessed
Let the AI evaluate it

Provide your document and answer the seven opening questions. The AI applies the full framework, scores each dimension against the defined criteria, and returns a complete evaluation report. Available as a conversational evaluation or as a formatted PDF record.

Access requires authentication