For graduate researchers in medicine & biomedicine

Catch the missing methods items reviewers reject for. Before you submit.

Click Scan methods section in your Google Doc or Overleaf project. In under two seconds, see which CONSORT, STROBE, or PRISMA items are detected, partial, or missing — with the official guideline language and a curated example sentence for every flagged item.

Chrome Web Store listing pending review — install button arrives once approved.

3 reporting guidelines (CONSORT 2025 · STROBE · PRISMA 2020) · 7-day Pro trial — no card · Local-first · no AI · no document upload

<2 seconds per scan — extracts the methods section, runs every guideline item
3 guidelines — CONSORT 2025, STROBE, PRISMA 2020 — covers ~80% of medical & biomedical journals
0 servers — your draft never leaves your browser
1 desk rejection typically costs weeks. The extension costs $5/month.
The problem

The methods section is where journals reject you.

A reporting-guideline gap doesn't get caught by spellcheck, isn't visible to your supervisor on a 4 AM read-through, and only surfaces when a journal reviewer scrolls down and notices.

The unaided checklist

You open the official PDF on equator-network.org, manually compare it line-by-line against your draft, and hope you didn't skim past the item that asks for the allocation concealment mechanism.

The deadline blind spot

The methods section is usually the last thing rewritten before submission — which is also when you've been staring at it longest and your eyes glaze over the most.

The desk-rejection cost

"Methods section does not adhere to CONSORT/STROBE/PRISMA reporting standards" is one of the most common desk-rejection reasons at medical journals — and it typically arrives 4–6 weeks after submission.

The "I'll get to it later" debt

A bias paragraph, a sample-size justification, an interim-analysis description — each takes ten minutes to write and is forgotten until a reviewer flags it. Reviewers always do.

How it works

Scan, fix, re-scan, ship.

Pick the guideline that matches your study type. Click Scan. Edit the flagged items inline. Re-scan as you go. Copy a plain-text checklist summary for your supervisor before submission.

1

Pick the guideline

Dropdown in the popup. CONSORT 2025 for RCTs, STROBE for cohort/case-control/cross-sectional, PRISMA 2020 for systematic reviews.

2

Click Scan

The extension extracts the text under your Methods heading (or Materials and Methods, Methodology, Study design and participants) and runs every item.

3

Read the report

Three groups: detected (green), partial (amber), not detected (red). Each red item shows the official guideline language and a curated example sentence.

4

Edit and re-scan

Add the missing paragraph in your doc, then click Re-scan. Items update live. Use Mark as addressed for the rare case where the engine misses your phrasing.

7-day free trial

The trial is independent of payment

No card details, no checkout, no Stripe page during the trial. Install the extension, click a button inside the popup, and you have 7 days of full access. Decide whether to subscribe at the end — not at the start.

1

Install the extension (free)

Get Methods Section Sanity Checker from the Chrome Web Store. The first install opens a welcome tab with the trial button.

2

Click Start 7-day trial

One click. No card. No email. No Stripe page. The trial starts on that click and counts down locally in your browser.

3

Scan as many drafts as you need

All three guidelines, full curated examples, "Mark as addressed" persistence, copy-summary, plain-text paste fallback — every feature unlocked for 7 days.

4

Decide on day 7

If it saved you a desk rejection, subscribe ($5/mo or $29/yr — link below). If not, the extension still shows detected items in a free preview. Nothing auto-charges because nothing was ever entered.

Two clean paths. The trial lives entirely inside the extension and never touches Stripe. The subscription (when you're ready) goes through Stripe and unlocks Pro permanently. They're decoupled — start the trial today; pay only when you've decided it's worth it.

Three guidelines, every item

What the engine checks for

Each guideline ships as a curated keyword library covering British and American spellings, with false-positive guards for heading-only mentions, citation context, and blockquotes, plus explicit-negation handling (a sentence that explicitly rules an item out still counts as addressing it). v1 covers the three Tier-1 guidelines that account for ~80% of medical and biomedical journal submissions.
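The guard behaviour described above can be sketched in a few lines. Everything below is a hypothetical illustration — the item name, the regex patterns, and the length threshold are stand-ins, not the extension's shipped library:

```javascript
// Hypothetical sketch of one CONSORT item with false-positive guards.
// The pattern covers British and American spellings in one alternation.
const allocationConcealment = {
  id: "allocation-concealment",
  pattern: /allocation concealment|sealed opaque envelopes|central randomi[sz]ation/i,
};

function matchItem(item, text) {
  for (const line of text.split("\n")) {
    if (!item.pattern.test(line)) continue;
    // Guard 1: heading-only mention (short line, no sentence punctuation).
    const headingOnly = line.trim().length < 40 && !/[.!?]/.test(line);
    // Guard 2: the keyword appears only in citation context.
    const citationOnly = /\[\d+\]|\(\w+ et al\./i.test(line);
    if (headingOnly || citationOnly) return "partial";
    return "detected";
  }
  return "not detected";
}
```

Under this sketch, a bare heading line like "Allocation concealment" downgrades to partial, while a full sentence describing the mechanism scores as detected.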

CONSORT 2025
Randomised controlled trials

22 items covering trial design, eligibility, intervention details, randomisation method, allocation concealment, blinding, statistical analysis, registration, and ethics.

Catches things like

  • Missing allocation concealment mechanism
  • Randomisation mentioned but no method (computer-generated? blocks? stratified?)
  • Sample size calculation without alpha, power, or effect size
  • Trial registration ID absent
  • Blinding stated but no description of who was blinded

The benefit: the items reviewers at NEJM, JAMA, Lancet, and BMJ flag most often — caught before you click submit.

STROBE
Cohort, case-control, cross-sectional

16 items covering study design, setting, participants, variables, data sources, bias, study size, statistical methods, confounding control, missing-data handling, and ethics.

Catches things like

  • Bias mentioned only as a heading with no body paragraph
  • Confounders adjusted for not enumerated
  • Loss-to-follow-up handling unspecified (cohort)
  • Matching criteria missing (case-control)
  • Sensitivity analyses claimed but not described

The benefit: registry-based and observational papers — the studies most exposed to STROBE compliance scrutiny — get a structural completeness check.

PRISMA 2020
Systematic reviews & meta-analyses

21 items covering eligibility, information sources, search strategy, selection process, data collection, risk-of-bias tools (RoB 2, ROBINS-I), effect measures, synthesis methods, heterogeneity, GRADE certainty, and protocol registration.

Catches things like

  • Search strategy described without database list or last-searched date
  • Risk-of-bias tool not named (RoB 2? ROBINS-I? Newcastle-Ottawa?)
  • PROSPERO / OSF registration missing
  • Two-reviewer screening claim without disagreement-resolution method
  • GRADE certainty assessment absent

The benefit: reviews submitted to BMJ, Cochrane, and JBI — the journals most strict about PRISMA — pass the structural-completeness gate before peer review starts.

Who it's for

Built for the people most exposed to reporting-guideline rejections

Methods Section Sanity Checker is not a writing assistant. It's a structural completeness checker for graduate students and early-career researchers writing first-author papers in medicine, public health, epidemiology, and biomedical sciences.

Clinical PhDs in RCTs

CONSORT 2025

You're writing up your first RCT. Reviewer 2 is going to count CONSORT items. Run the scan in Google Docs the day before submission and fix the three items you forgot.

Epidemiology & public health

STROBE

Cohort or case-control. The review board has been trained to look for STROBE compliance. Catch the bias-paragraph gap, the loss-to-follow-up note, the sensitivity-analysis section.

Systematic reviewers

PRISMA 2020

Cochrane and BMJ enforce PRISMA strictly. The 27-item checklist in the front matter looks intimidating; running the scan turns it into a five-minute fix list.

Supervisors & methodologists

All three

Pre-submission spot-check before sending student work to a journal. Plain-text paste fallback covers Word and Pages drafts that aren't in Docs or Overleaf.

Privacy & trust

Your unpublished methods section is some of the most sensitive content on your computer. We treat it that way.

  • Your draft never leaves your browser. Detection runs locally on the text the extension reads from your active tab.
  • No AI, no LLM calls, no machine learning. Detection is pure regex over curated keyword libraries; AI was used only at planning time, to help curate those libraries.
  • Scoped permissions. The extension declares https://docs.google.com/document/* and https://www.overleaf.com/project/* only. It cannot run on any other domain.
  • License validation only. The only thing we send to our servers is your license key — once every 7 days for revalidation.
  • No tracking, no analytics, no telemetry, no fingerprinting in the extension itself.

Document reading: on Google Docs we call the standard ?format=txt export endpoint via your browser session — Google never sees us as a third party. On Overleaf we read the LaTeX source from the in-page CodeMirror editor. Nothing else is read or transmitted.
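As a sketch, the Docs path amounts to rewriting the tab URL into the document's standard plain-text export endpoint and fetching it with the user's own session. The endpoint shape is the well-known Docs export URL; the helper names are hypothetical:

```javascript
// Extract the document ID from a Google Docs tab URL.
function docIdFromUrl(tabUrl) {
  const m = tabUrl.match(/docs\.google\.com\/document\/d\/([^/]+)/);
  return m ? m[1] : null;
}

// Build the standard plain-text export endpoint for that document.
function buildExportUrl(docId) {
  return `https://docs.google.com/document/d/${docId}/export?format=txt`;
}

// In the extension this would run inside the Docs tab, so fetch() carries
// the user's own Google session cookies — no third party involved:
//   const text = await (await fetch(buildExportUrl(id))).text();
```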

Pricing — when you're ready to subscribe

Two plans. Same features.

The 7-day trial is separate — start it inside the extension before you ever come here. Subscribe when you've decided it's worth it.

Monthly

$5
per month
  • All three guidelines (CONSORT 2025 · STROBE · PRISMA 2020)
  • Curated example sentences for every flagged item
  • "Mark as addressed" persistence per document
  • Copy-checklist-summary for your supervisor
  • Plain-text paste fallback (Word, Pages, Scrivener)
  • 7-day free trial — no card required
Get monthly

Mastering Research readers: use code MASTERINGRESEARCH for $14.50/year (first 100 customers).

FAQ

Frequently asked questions

How does the 7-day trial work? Do I need to enter a card?

No card. No payment. No Stripe page. The trial is fully independent of the subscription pipeline. After you install the extension from the Chrome Web Store, the welcome tab opens automatically with a Start 7-day trial button. The countdown starts on that click — locally, in your browser — and you have 7 days of full access. Nothing auto-charges because nothing was ever entered.

At the end of 7 days, the popup tells you the trial is over. If you want to keep curated example sentences and "Mark as addressed" persistence, come back to this page and click Get monthly or Get annual — that's the point you go through Stripe.

What stops someone from uninstalling and reinstalling to get a new trial?

Nothing technical, on purpose. The trial is honor-system. We don't fingerprint your browser, we don't require an account, and the only thing we send to our servers is your license key (when you have one). The trial timestamp lives in chrome.storage.sync, so it follows your Chrome profile across devices — but a determined person could clear sync data or use a different Chrome profile to reset it.

That's a deliberate trade-off. Locking down the trial would mean either browser fingerprinting or requiring an email account, both of which would break the privacy posture the rest of this page is about. The kind of person who'd cycle uninstalls weekly to dodge $5/month was never going to pay anyway, and the friction self-selects them out of our customer base.

If you genuinely need more time to evaluate before committing, email support@gradsummit.com — we'd rather extend your trial than lock you out.
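In code, the honor-system countdown described above is nothing more than a stored timestamp compared against the clock. This is a minimal sketch with hypothetical names, the chrome.storage.sync read/write elided:

```javascript
const TRIAL_DAYS = 7;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Called once when the user clicks "Start 7-day trial". In the extension
// the returned state would be written to chrome.storage.sync so it follows
// the Chrome profile across devices.
function startTrial(now = Date.now()) {
  return { trialStartedAt: now };
}

// Whole days of trial remaining; 0 means the trial is over (or never started).
function trialDaysLeft(state, now = Date.now()) {
  if (!state || state.trialStartedAt == null) return 0;
  const elapsed = now - state.trialStartedAt;
  return Math.max(0, TRIAL_DAYS - Math.floor(elapsed / MS_PER_DAY));
}
```

Because there is no server round-trip, clearing sync data really does reset the counter — which is exactly the trade-off the answer above describes.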

Does this use AI to write my methods section?

No. The extension does not write methods sections, does not suggest text, and does not call any LLM. The detection engine is pure regex over curated keyword libraries — both British and American spellings shipped — for each item in the chosen guideline. The example sentences shown for missing items are illustrative only; they are not generated and not personalised. Your job is to write the missing paragraph in your own voice; the extension's job is to tell you which paragraphs are missing.

Which guidelines does v1 cover, and what about STARD, SPIRIT, CARE, ARRIVE?

v1 ships CONSORT 2025 (RCTs), STROBE (cohort, case-control, cross-sectional), and PRISMA 2020 (systematic reviews and meta-analyses) — together they cover roughly 80% of submissions across medical and biomedical journals. STARD (diagnostic accuracy) and SPIRIT (trial protocols) are scheduled for v1.1, ~4–6 weeks post-launch. Tier 2 guidelines — CARE, ARRIVE 2.0, CHEERS, SQUIRE, TRIPOD+AI, SRQR, COREQ, TIDieR — roll out monthly thereafter. Annual subscribers get every new guideline as it ships at no extra cost.

What does "partial" mean in the report?

Three states: detected (green) means the engine found a strong, specific match — e.g., "computer-generated random allocation sequence" satisfies CONSORT randomisation method. Partial (amber) means the topic is mentioned but the specifics are missing — e.g., the word "randomisation" appears but no method is given. Not detected (red) means no match at all. Partial also fires when the engine's false-positive guards downgrade a match — for example, if "bias" appears only inside a heading line with no body paragraph addressing it, that's partial, not green.
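The randomisation example above reduces to a topic-vs-specifics check. Both regexes here are illustrative stand-ins for the real keyword library, and the function name is our own:

```javascript
// Topic keywords: the item is at least mentioned.
const topic = /randomi[sz](ation|ed)|random allocation/i;
// Specific keywords: a concrete method is named.
const specifics = /computer-generated|block randomi[sz]ation|stratified/i;

function classifyRandomisation(text) {
  if (!topic.test(text)) return "not detected"; // red: no match at all
  if (specifics.test(text)) return "detected";  // green: method given
  return "partial";                             // amber: topic, no method
}
```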

The extension said an item is missing, but I clearly addressed it. Why?

The keyword libraries are conservative on purpose: tuned for high precision (few false positives) at the cost of some false negatives, where the engine misses an unusual phrasing. Click Mark as addressed on the item — it persists per document URL and the engine treats it as detected from then on. If you find a phrasing that should obviously match, email support@gradsummit.com with the sentence and we'll fold it into the next library update.

Does it work for non-English methods sections?

Not in v1. Keyword libraries are English-only. Spanish, French, German, Mandarin, and Japanese are on the long-term roadmap but not scheduled — multilingual detection requires a curated library per language, and we'd rather ship a few well than many poorly. If you need a language we don't cover, email support@gradsummit.com; the request volume helps us prioritise.

I write in Word, not Google Docs or Overleaf. Can I still use it?

Yes. The popup has a Paste tab — paste your methods section text directly into the textarea, click Scan, get the same report. Same for Pages, Scrivener, journal portals, Notion, anywhere we don't have a direct content script. Word for the web is on the v1.2 roadmap as a first-class platform.

Why didn't the scan find my methods section?

The extractor looks for headings like Methods, Materials and Methods, Methodology, or Study design and participants. If your section is under a non-standard heading (e.g., Approach, What we did), the extractor won't find it — switch to the Paste tab as a workaround. The error message in the panel will say so explicitly.
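The heading-based extraction described above can be sketched as follows. The heading list mirrors the ones named in the answer; the function name and the next-heading heuristic are hypothetical simplifications:

```javascript
// Headings the extractor recognises as the start of the methods section.
const METHODS_HEADINGS =
  /^(methods|materials and methods|methodology|study design and participants)\s*$/i;

function extractMethods(lines) {
  const start = lines.findIndex((l) => METHODS_HEADINGS.test(l.trim()));
  if (start === -1) return null; // non-standard heading: fall back to the Paste tab
  // Simplified heuristic: the section ends at the next heading-like line
  // (short, no sentence punctuation) or at the end of the document.
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    const t = lines[i].trim();
    if (t && t.length < 40 && !/[.!?]/.test(t)) {
      end = i;
      break;
    }
  }
  return lines.slice(start + 1, end).join("\n").trim();
}
```

A document whose section sits under "Approach" returns null here, which is where the panel's explicit error message and the Paste tab come in.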

How is this different from EQUATOR's checklist PDFs?

EQUATOR's PDFs are the ground truth. We point at them as the source-of-record (equator-network.org) and the extension's official-text fields are taken from those documents. The difference is operational: instead of opening a 5-page PDF and manually checking each item against a 1,500-word draft, you click one button and get a sorted report in two seconds.

Where is my data stored? Can I cancel any time?

Locally in your browser (IndexedDB) — your "Mark as addressed" overrides per document, and a count of your last 50 scans (item counts only, never document text). On our license server: your license key, the email you used at checkout, your subscription status. Nothing about your documents. Cancel any time — manage the subscription via the link in your activation email or email support@gradsummit.com. Refunds within 7 days of purchase, no questions asked.