Quirrely VIP — The Platform

The engine that
makes it possible.

The LNCP engine is the instrument Quirrely VIP is built on. Five dimensions. 110+ linguistic patterns. 19 polymarkers. This page explains how it works, why it measures what it measures, and why the same engine serves four fundamentally different use cases without losing precision in any of them.

10,510,100,501
Possible voice signatures
Five dimensions. 101 values each. No two voices share the same five.
24–72hr
Standard turnaround
From search parameters to scored profiles. No platform. No queue.
DaaS
Deliverable, not subscription
Priced by scope. No seats, no dashboards, no onboarding.
The origin

Why language has always
been measurable.

The founder spent 18 years writing — across categories, registers, and forms — before building Quirrely VIP. Writing at that volume produces an unusual sensitivity to the structural properties of language. Not just what words mean, but how they behave. The weight of a declarative sentence. The interpretive load placed on a reader. The variance between a writer in formal register and the same writer in casual register.

The question that founded the platform was simple: why is there no instrument for this? Sentiment analysis exists. Readability scores exist. Topic classification exists. But none of these measure the cognitive signature of the writing itself — the properties that make one brand sound like it means what it says and another sound like it is hedging on every channel.

Language carries structural properties that are independent of meaning. Those properties are quantifiable. The LNCP engine is the instrument built to quantify them.

Never retrofit a framework onto reality. Let reality define the framework.

The five dimensions did not come from a theory of brand voice. They came from the patterns that emerged when language was read at scale and the structural signals were isolated. Force, Stability, Mediation, Structure, and Range are descriptions of what is actually there — not categories imposed from outside.

How it works

Pattern detection.
Polymarker assignment.
Dimension scoring.

The LNCP engine runs in three passes. Each pass adds a layer of resolution. The output is a fingerprint — not a classification, not a sentiment score, not a readability grade. A precise cognitive signature of the writing.

01
Pattern detection
110+ linguistic patterns scan the corpus at the sentence and sub-sentence level. Zero patterns, operator patterns, scope patterns, informal markers, discourse markers, contraction patterns, high-intent markers — each fires independently against the text. The raw pattern signal is the input to the next pass.
02
Polymarker assignment
A polymarker is a named linguistic signal that emerges from combinations of base patterns. The engine assigns 19 polymarkers — CERTAINTY, CONDITIONALITY, CONTRAST, HEDGING, and 15 others — based on the pattern combinations detected. Active polymarkers light up per fingerprint. Every corpus returns a unique combination.
03
Dimension scoring
From the polymarker combination and the raw pattern signal, the engine scores the five dimensions — Force, Stability, Mediation, Structure, Range — each on a continuous 0–100 scale. The result is a fingerprint: five numbers that together describe the cognitive signature of the writing with a precision no single metric can achieve.
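The LNCP engine itself is proprietary, but the three-pass shape described above can be sketched. In this toy version every pattern name, marker rule, and weight is an illustrative stand-in, not an actual LNCP internal:

```python
# Toy sketch of the three-pass shape: patterns -> polymarkers -> dimensions.
# All pattern names, marker rules, and weights are illustrative stand-ins.
import re

# Pass 1: base patterns fire independently against the text.
PATTERNS = {
    "hedge": re.compile(r"\b(may|might|perhaps|possibly)\b", re.I),
    "declarative": re.compile(r"\b(is|are|will|must)\b", re.I),
    "contraction": re.compile(r"\b\w+'(t|s|re|ll|ve)\b", re.I),
}

def detect_patterns(text: str) -> dict[str, int]:
    """Raw pattern signal: how often each base pattern fires."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items()}

# Pass 2: polymarkers emerge from combinations of base patterns.
def assign_polymarkers(counts: dict[str, int]) -> set[str]:
    markers = set()
    if counts["hedge"] > counts["declarative"]:
        markers.add("HEDGING")
    else:
        markers.add("CERTAINTY")
    if counts["contraction"] > 0:
        markers.add("INFORMAL")
    return markers

# Pass 3: markers plus raw counts score the dimensions on 0-100.
def score_dimensions(counts: dict[str, int], markers: set[str]) -> dict[str, float]:
    total = sum(counts.values()) or 1
    # Only Force is sketched here; the other four dimensions would be
    # scored the same way from their own pattern and marker inputs.
    force = 100 * counts["declarative"] / total
    if "HEDGING" in markers:
        force *= 0.5  # a hedged corpus pulls Force down
    return {"Force": round(force, 1)}
```

A corpus of flat declarations ("This is certain. We will ship.") lights up CERTAINTY and scores maximum Force in this sketch; the real engine does the same kind of work across 110+ patterns and 19 markers.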
19 polymarkers
Each corpus activates a unique subset. The combination — not just the individual markers — is what defines the profile. 19 polymarkers produce 524,288 possible marker state combinations.
CERTAINTY · CONDITIONALITY · CONTRAST · EMPHASIS · HEDGING · POSSIBILITY · FORMAL · INFORMAL · OPEN · CLOSED · BALANCED · COMPLEX · SIMPLE · CONTRADICTORY · HIGH · MEDIUM · LOW · MINIMAL · MODERATE
Combine 524,288 marker states with continuous 0–100 scoring on five dimensions and the fingerprint space is effectively infinite. Two brands that appear similar on the surface will produce measurably different fingerprints.
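The two counts quoted on this page follow directly from the architecture: 19 on/off markers give 2^19 states, and five dimensions at 101 integer values each give 101^5 signatures:

```python
# 19 binary polymarkers: each is either active or inactive in a fingerprint.
marker_states = 2 ** 19
assert marker_states == 524_288

# Five dimensions, 101 integer values each (0 through 100 inclusive).
voice_signatures = 101 ** 5
assert voice_signatures == 10_510_100_501
```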
The five dimensions

What each dimension
actually measures.

Each dimension is a continuous score from 0 to 100. They are not labels or archetypes. A Force score of 28 and a Force score of 83 are both valid — they describe different writing, not better or worse writing. The question is always: does the score match the intent?

Force
Assertiveness and declarative weight
Force measures how hard the language pushes. High Force writing makes declarations, asserts positions, and does not hedge. It moves toward the reader. Low Force writing qualifies, conditions, and opens space for the reader to decide. Neither is inherently stronger — a brand that leads with conviction scores high Force; a brand that invites rather than directs scores low. The problem is when a brand scores 83 on one channel and 4 on another. That gap is not range. It is fragmentation.
Channel variance — B2B tech brand
Homepage
83
Social
4
Campaign
61
79-point gap between homepage and social. Category norm: 18 pts. Every channel making a different promise to the same buyer.
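The gap figure above is simply the spread between the strongest and weakest channel on one dimension. A minimal sketch, using the Force scores from this example:

```python
# Force scores per channel for the B2B tech brand example above.
force_by_channel = {"Homepage": 83, "Social": 4, "Campaign": 61}

def channel_gap(scores: dict[str, int]) -> int:
    """Spread between the highest- and lowest-scoring channel on one dimension."""
    return max(scores.values()) - min(scores.values())

gap = channel_gap(force_by_channel)  # 83 - 4 = 79, vs a category norm of 18
```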
Stability
Consistency of register and tone
Stability measures how consistently the register and tone hold across the full corpus. High Stability writing maintains a coherent voice throughout. Low Stability writing shifts register unpredictably. High Stability is not the same as high quality — a consistently casual brand can score high Stability. Stability is also the AI signal dimension: purely generated text scores abnormally high Stability because it lacks the natural register variation of human writing.
Stability contrast — human vs generated
Human writer
69
AI-generated
91
Stability 91 with Mediation 3 — the tell of generated text. Human writing produces uneven profiles. This evenness is the signal.
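The engine's actual thresholds are not published. A toy version of the Stability/Mediation tell, with made-up cutoffs, shows the shape of the heuristic:

```python
def ai_signal(stability: float, mediation: float) -> str:
    """Toy classifier for the Stability/Mediation tell described above.
    The cutoff values are illustrative, not the engine's actual thresholds."""
    if stability >= 85 and mediation <= 10:
        return "ai-generated"   # flat high Stability, near-zero Mediation
    if stability >= 75 and mediation <= 25:
        return "hybrid"         # assisted drafting is plausible
    return "human"              # uneven profile, natural register variation
```

On the example profiles above, Stability 91 with Mediation 3 lands in "ai-generated", while the human writer's Stability 69 does not.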
Mediation
Interpretive presence
Mediation measures how much the writer shapes meaning for the reader. High Mediation writing does not just present information — it interprets it, contextualises it, and makes the connection explicit. The writer is present in the text, doing work on behalf of the reader. Low Mediation writing delivers information and stops. A brand that scores Mediation 36 on its About page and Mediation 0 on its campaign copy is not making a deliberate choice. It has two different writers with two different assumptions about how much to give the reader.
Mediation across channels
About / Mission
36
Campaign copy
0
Mediation 36 is the highest single-channel score in the peer category. Mediation 0 on campaign means the brand stops at delivery and leaves the meaning to the buyer.
Structure
Syntactic complexity and architectural density
Structure measures the syntactic complexity and architectural density of the writing — how sentences are built, how many layers of clause are stacked, how much the syntax itself carries meaning. High Structure writing uses subordinate clauses and nested constructions to build complexity. Low Structure writing favours simple declarative forms. Structure is often the most consistent dimension across channels for a given brand — structural habits persist even when register shifts. When Structure shifts sharply across channels, it usually indicates different authors, not different intentions.
Structure — peer category comparison
Brand A
41
Brand B
39
Brand C
35
Structure is often the most stable dimension across a peer category. The differences that do appear are real — not noise.
Range
Lexical breadth and variability
Range measures how wide the vocabulary draws and how much it varies across the corpus. High Range writing pulls from a broad lexical field — specialist terminology alongside plain language, technical precision alongside narrative. Low Range writing stays within a narrow vocabulary set. Range is not the same as vocabulary size — it is a measure of variability and reach. A technical brand targeting enterprise buyers that scores low Range may be underselling its depth. The question is whether the Range score matches the positioning.
Range — positioning alignment
Enterprise SaaS
48
Consumer app
22
Range 48 vs 22 — the enterprise brand draws from a wider lexical field. Both are appropriate for the positioning. The question is whether each brand chose deliberately.
What the fingerprint reveals

Three things you cannot see
without measurement.

The fingerprint is not a score to optimise. It is an instrument for seeing what is already there — and making deliberate decisions about it.

Channel variance
Where your brand voice holds and where it drifts
Every channel in your corpus is scored independently. The variance between channels — across all five dimensions — is the primary output of a Signal analysis. A brand that holds on Stability but fragments on Force is not a brand with range. It is a brand with a briefing problem.
79 pts
Force gap — homepage vs social, B2B tech brand. Category norm: 18 pts.
Competitive gap
Where you sit in the category on every dimension
Competitive mode benchmarks your brand against up to five peers across all five dimensions simultaneously. The gaps that emerge are facts — not opinions about positioning. A Force gap of 20 points between you and the category leader is a strategic decision waiting to be made.
CERTAINTY
Dominant profile across four of five brands in the DaaS peer category.
AI signal
Where generated text has entered the corpus
The engine identifies three signal states: clean human writing, hybrid assisted drafting, and AI-generated text. Flat high Stability with near-zero Mediation is the pattern of generated text. The signal does not judge — it reports. What you do with the information is strategy.
Hybrid
Detected on homepage corpus. High Force, high Stability, low Range — assisted drafting is plausible.
How analysis is delivered

Two layers.
One read.

Every Quirrely product delivers analysis in two layers. The summary verdict fires automatically — a cross-unit synthesis shaped to your role. The deep read goes unit by unit, on demand. Together they give you the strategic read and the evidence in the same document.

Two-layer architecture
1
Summary verdict — fires automatically. Cross-unit synthesis, role-shaped, delivered at the top. Signal delivers a brand character read. Brief delivers a contact read. Fit delivers a pool verdict. Lens delivers a comparative read across all three outlets. Always visible without interaction.
2
Deep read — fires on demand. Per-unit analysis: per channel in Signal, per outlet in Lens, per candidate in Fit, per contact in Brief. Dimension-by-dimension, with a written read on each unit. The evidence behind the verdict.
One engine
Four products. The corpus changes. The measurement does not.
See the products →
Four use cases

Why the same engine serves
four different problems.

Brand copy, news copy, contact writing, and applicant writing are all language. All language carries the same structural properties. All of those properties are measurable on the same five dimensions. The corpus changes. The instrument does not.

Signal
The corpus is your brand
Homepage, social, campaign, sales copy, About page. Each channel scored independently. The question: does your brand voice hold across channels, and where does it sit relative to the category? Competitive mode adds peer brands.
Corpus: owned brand copy across channels. Competitive mode adds peer brands.
Lens
Now available
The corpus is the media
Three outlets, one topic, three editorial voices scored and compared. Tell us the story — the engine reads the voices underneath. Returned within 24 hours of submitting your search parameters.
Corpus: editorial copy from 300+ global outlets, selected by beat and region.
See Lens →
Brief
The corpus is the person across the table
Any public writing — LinkedIn posts, bylined articles, published essays. The question: what is the cognitive signature of this person's writing, and how do I engage with it?
Corpus: minimum 300 words of public writing from the contact.
Fit
The corpus is both sides of the hire
Employer voice and applicant writing scored independently, then compared. The question: where do these two cognitive signatures align, and where do they diverge? Before the interview, not after.
Corpus: employer brand copy + applicant writing, minimum 300 words each.
In practice
Lens is the first live application of the LNCP engine. Three outlets profiled side by side. Five dimensions scored. Directional interpretation returned within 24 hours.
See Lens →

Delivered analysis.
Priced by scope.

We work from your brief. Analysis returned in 24–72 hours. Review within 7 days.

Book a conversation