Best Practices

How to get the most reliable answers from FactScope’s fact-checking agents.

Where FactScope shines

  • Binary, evidence-backed statements. Claims framed as “did X happen / is Y approved / does Z exist” are easiest for the agents to verify.
  • Regulatory or commercial status checks. Approvals, launches, tenders, filings, and other “is it officially live yet?” questions map cleanly to the pipeline’s sourcing rules.
  • MES assertions. Anything about the existence, feasibility, context, or stage of a technology fits the MES (existence/feasibility/context/stage) checks baked into the verdict model.
  • Time- and quantity-bounded facts. Numbers tied to a date ("by 2025 Q3, >=1,000 buses") let the temporal guard-rails work.
  • Wording audits. Use the tool to spot overstated or misleading phrasing (pilot vs. launch, research vs. commercialization).

The Simple workflow extracts up to ceil(characters / 600) claims with a hard cap of 12 per request and a soft limit of 3 before manual confirmation. Keep each request focused so high-value claims are not truncated.
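
As a rough illustration, the cap works out like this (the function name and return shape below are illustrative, not the actual API):

```python
import math

HARD_CAP = 12    # maximum claims extracted per request
SOFT_LIMIT = 3   # above this, a manual confirmation step is required

def estimate_claim_budget(text: str) -> dict:
    """Sketch of the adaptive cap: ceil(characters / 600), clamped to the hard cap."""
    adaptive_cap = min(math.ceil(len(text) / 600), HARD_CAP)
    return {
        "adaptive_cap": adaptive_cap,
        "needs_confirmation": adaptive_cap > SOFT_LIMIT,
    }

# A 2,400-character brief yields ceil(2400 / 600) = 4 candidate claims, which
# already exceeds the soft limit of 3 and will trigger the confirmation step.
print(estimate_claim_budget("x" * 2400))
```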

When not to ask

  • Subjective judgements (better/nicer/promising) that lack objective ground truth.
  • Pure causality or counterfactuals (“entirely because…”, “would have…”).
  • Forward-looking predictions (“will definitely next year”).
  • Anything that relies on private or internal data; the system only cites public sources, so the best possible verdict is usually unclear.
  • Content likely to fail safety screening; omni-moderation-latest runs before and after every request, and blocked inputs abort the job with no progress (a pre-screen sketch follows this list).
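
Because a blocked input aborts the job with no progress, it can save credits to run the same moderation model over a draft before submitting it. A minimal sketch using the OpenAI SDK; this pre-check happens on your side and is not part of FactScope itself:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def prescreen(text: str) -> bool:
    """Return True when the draft is unlikely to be blocked by the safety screen."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    if result.flagged:
        print("Likely to be blocked. Flagged categories:", result.categories)
    return not result.flagged

if prescreen("Did Drug X receive FDA approval for indication Y in 2025?"):
    print("Safe to submit.")
```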

How to ask effectively

Prompt blueprint

Actor / object: clear entity names (company, regulator, product, drug, project)
Action / status: what happened (approved, launched, shipped, produced, filed)
Time & scope: date/quarter, geography, indication, model, batch
Evidence expectation: specify the preferred source tier (regulator, academic, tender, etc.)
Optional hints: locale ("zh" or "en"), known aliases, or on-chain IDs
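
If it helps, the blueprint can be drafted as a small structure before flattening it into a sentence; the class and field names below simply mirror the fields above and are not part of any FactScope API:

```python
from dataclasses import dataclass

@dataclass
class ClaimDraft:
    """Illustrative container mirroring the prompt blueprint above."""
    actor: str            # clear entity name (company, regulator, product, drug, project)
    action: str           # what happened (approved, launched, shipped, produced, filed)
    time_scope: str       # date/quarter plus geography, indication, model, or batch
    evidence_tier: str    # preferred source tier (regulator, academic, tender, ...)
    hints: str = ""       # optional locale, known aliases, or on-chain IDs

    def to_prompt(self) -> str:
        base = (f"Did {self.actor} {self.action} {self.time_scope}? "
                f"Prefer {self.evidence_tier}.")
        return f"{base} Hints: {self.hints}" if self.hints else base

draft = ClaimDraft(
    actor="Company B",
    action="reach mass production of Product C",
    time_scope="in mainland China by 2025 Q3",
    evidence_tier="regulatory filings or earnings disclosures",
    hints='locale "zh", alias "B Co."',
)
print(draft.to_prompt())
```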

Strong question examples

  • “Did Drug X receive FDA approval for indication Y in 2025? Cite the official filing.”
  • “By 2025 Q3, did City A deploy >=1,000 hydrogen buses? Prefer municipal announcements or tender records.”
  • “Has Company B reached mass production of Product C in mainland China? Point to regulatory filings or earnings disclosures.”

Rewrite weak prompts

  • “Is X technology mature?” → “Does X have live commercial deployments? In which industry/country, and since when?”
  • “Is Y better than Z?” → “On public benchmark K, does Y outperform Z? By how much, and which evaluation report backs it?”

Extra pipeline-aware tips

  • Break very long briefs into themed chunks (~2,000 characters) so the adaptive claim cap doesn’t trim important statements (see the sketch after this list).
  • If you need more than three verified claims, expect a confirmation screen—approve only the ones worth the credits (1 credit per claim is reserved up front and refunded if the run fails).
  • Set locale when mixing Chinese and English; it guides claim extraction and the recommended phrasing while still generating bilingual search queries.
  • Each claim triggers up to three search queries; include synonyms, variants, or time filters directly in your statement to reduce wasted queries.
  • Concurrency is limited (default 2 claims in flight). Large batches finish faster if you send multiple smaller requests instead of one giant paste.
  • Order claims by business value so the most critical ones are processed first.
  • Add English aliases or well-known abbreviations directly in the prompt even when the narrative is in Chinese; it saves translation hops.
  • If you expect certain source tiers (regulators, academic journals, tenders), state that preference explicitly so retrieval leans toward them.
  • Always anchor the timeframe (“as of 2025-09-30” or “during 2024 Q4”) to prevent otherwise relevant evidence from being marked outdated.
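
A minimal sketch of the chunk-and-batch tips above; the 2,000-character chunk size and batch size of 3 come from this list, and the naive character split stands in for a real, theme-aware split:

```python
CHUNK_CHARS = 2000   # themed chunk size suggested above
BATCH_CLAIMS = 3     # stay at or below the soft limit per request

def split_brief(brief: str) -> list[str]:
    """Cut a long brief into ~2,000-character chunks (a real split should follow themes)."""
    return [brief[i:i + CHUNK_CHARS] for i in range(0, len(brief), CHUNK_CHARS)]

def batch_claims(priority_claims: list[str]) -> list[list[str]]:
    """Slice claims, already ordered by business value, into soft-limit-sized batches."""
    return [
        priority_claims[i:i + BATCH_CLAIMS]
        for i in range(0, len(priority_claims), BATCH_CLAIMS)
    ]

print(len(split_brief("x" * 5000)))                                # 3 chunks
print(batch_claims(["claim A", "claim B", "claim C", "claim D"]))  # [[A, B, C], [D]]
```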

Evidence and freshness rules

  • Source tiers: regulatory/government/peer-reviewed > institutional (universities, state labs) > high-tier media > other media/blogs. Top-tier sources lift the confidence score via the SOURCE_TYPE_WEIGHT weighting (a scoring sketch follows this list).
  • Temporal guard: evidence published before the claimed timeframe is downranked or flagged as outdated. Always mention the date you care about.
  • Multi-source preference: the verdict model caps confidence for single-source evidence. Ask for at least two independent sources if the topic is sensitive.
  • Web search window: every search query returns a small candidate pool (default raw limit 12, top-k display 3). Narrow the scope so the best snippets make the cut.
  • Failure refunds: blocked or failed requests automatically refund reserved credits; partial runs that end in an unclear verdict still consume credits because the pipeline completed.
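
To make the interplay of these rules concrete, here is an illustrative scoring sketch. The weights are placeholders in the spirit of SOURCE_TYPE_WEIGHT (the real values live in the pipeline configuration), and the single-source cap and temporal downranking only mimic the rules described above:

```python
from datetime import date

# Placeholder weights; the actual SOURCE_TYPE_WEIGHT values are not documented here.
SOURCE_TYPE_WEIGHT = {
    "regulatory": 1.0,     # regulatory / government / peer-reviewed
    "institutional": 0.8,  # universities, state labs
    "high_tier_media": 0.6,
    "other": 0.4,          # other media / blogs
}
SINGLE_SOURCE_CAP = 0.6    # illustrative cap when only one source backs the claim

def score_confidence(evidence: list[dict], claim_date: date) -> float:
    """Tier-weighted average, with outdated evidence downranked and single-source capping."""
    if not evidence:
        return 0.0
    weights = []
    for item in evidence:
        w = SOURCE_TYPE_WEIGHT.get(item["type"], SOURCE_TYPE_WEIGHT["other"])
        if item["published"] < claim_date:   # temporal guard: older than the claimed timeframe
            w *= 0.5
        weights.append(w)
    confidence = sum(weights) / len(weights)
    if len(evidence) < 2:                    # multi-source preference
        confidence = min(confidence, SINGLE_SOURCE_CAP)
    return round(confidence, 2)

print(score_confidence(
    [{"type": "regulatory", "published": date(2025, 7, 1)}],
    claim_date=date(2025, 6, 30),
))  # 0.6: strong source, but capped because it stands alone
```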

Reading verdict labels

Each verdict label, what it means, and how to react:

  • supported: evidence is consistent across strong sources. React: safe to quote as-is; still cite the sources.
  • supported_with_qualifier: mostly true, but only under the listed conditions (region, version, timeframe). React: rephrase with the qualifier and consider rechecking later.
  • partial: only part of the statement holds, or critical details are missing. React: split the claim and restate the portion that is actually confirmed.
  • contradicted: authoritative sources disagree with the statement. React: treat as disproven unless new evidence appears.
  • unclear: not enough open evidence, or the wording is too vague. React: provide more context or referenceable materials.
  • misleading_phrasing: the statement mixes stages (research vs. launch) or overstates scope. React: use the recommended phrasing returned in the report.
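
If verdicts feed into downstream tooling, the “how to react” guidance maps naturally onto a small dispatch. The report field names below are assumptions for illustration, not the actual report schema:

```python
NEEDS_REWRITE = {"supported_with_qualifier", "misleading_phrasing"}
NEEDS_MORE_EVIDENCE = {"partial", "unclear"}

def triage(report: dict) -> str:
    """Map a verdict label to the follow-up action suggested above."""
    label = report["verdict"]
    if label == "supported":
        return "Quote as-is and keep the citations."
    if label == "contradicted":
        return "Treat as disproven unless new evidence appears."
    if label in NEEDS_REWRITE:
        return "Adopt the recommended phrasing: " + report.get("recommended_phrasing", "n/a")
    if label in NEEDS_MORE_EVIDENCE:
        return "Gather the missing evidence and rerun the claim."
    return "Unknown label; review manually."

print(triage({"verdict": "misleading_phrasing",
              "recommended_phrasing": "Company X began pilot deployments in 2024."}))
```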

Turn report outputs into actions

  • Treat the “recommended phrasing” section as a ready-to-use rewrite whenever qualifiers or wording risks appear—lift it directly into your copy to prevent misinterpretation.
  • Convert “missing evidence” bullets into follow-up tasks; once you gather the filings or papers they request, rerun the claim to move from partial/unclear to supported.

Writing claims for higher support rates

  • Name entities precisely: full company/legal names, drug INNs, product SKUs, regulation IDs.
  • Add explicit dates (“as of 2025-09-30”) or quarters; the temporal checker needs this to compare evidence timelines.
  • Bound the geography, indication, model, batch, or version to avoid scope creep.
  • Mention any evidence leads you already have (official domain, journal title, announcement name).
  • Avoid marketing phrases like “rolled out nationwide” or “industry-leading” unless you define the metric.
  • Flag whether you expect regulatory, academic, or media evidence so the reranker can favor that tier.
  • If you rely on internal data, attach a redacted public reference; otherwise expect unclear.

Common pitfalls and fixes

Each pitfall, its typical result, and the fix:

  • Only linking press releases. Result: the verdict is downgraded because there is no primary source. Fix: ask for regulator filings, patents, or audited reports.
  • Fuzzy time (“recently”, “this year”). Result: the temporal check flags the timeframe as missing and lowers confidence. Fix: provide an absolute date or quarter.
  • Equating pilots with full launch. Result: often flagged as misleading_phrasing. Fix: specify the stage (pilot/POC vs. commercial deployment).
  • Ignoring regional differences. Result: evidence may contradict the claim because the wrong regulator is cited. Fix: name the jurisdiction (“in the EU”, “in mainland China”).
  • Stuffing 10+ claims into one paste. Result: the adaptive cap drops later claims and the soft limit forces manual triage. Fix: split into batches of ≤3 priority claims per run.
  • Mixing languages without locale hints. Result: the query builder may spend tokens normalizing the statement. Fix: set the locale or keep each request monolingual.
  • Requesting non-public metrics. Result: the verdict returns unclear. Fix: reframe toward outcomes with public traces (approvals, filings, tenders).

Copy-ready examples

  1. “Verify: Company C listed Product D in the EU in 2025-06. Needed evidence: regulator or exchange filing, listing country, model/version, dated timeline.”
  2. “Fact-check: ‘Project E deployed >=500 units by 2025 Q3.’ Prioritize government releases or annual reports over blogs.”
  3. “Stage check: Technology F hits metric G under ambient conditions. Cite peer-reviewed papers or university/government reports with experiment details and publish dates.”

Feel free to copy, tweak the placeholders, and paste straight into the workbench.

Scenario-specific tips

  • Compliance / legal. Phrase questions as “Was it approved / did the notice take effect?” and request document IDs or docket links.
  • Product / GTM. Emphasize “Is it commercial / in mass production / deployed with customers?” and name the buyer segment or region.
  • Research / tech. Ask for experiment settings, sample size, benchmarks, replication materials, and direct links to papers or datasets.

Following these practices keeps the agents focused, reduces wasted credits, and maximizes the odds of receiving supported verdicts with regulator-grade citations.