Governance-First Clinical Research Intelligence

Evidence,
rigorously
assembled.

MetaSearch Knowledge Hub is a structured research intelligence platform where AI-assisted synthesis meets mandatory human validation — producing publication-grade outputs for systematic reviews, meta-analyses, and original manuscripts.

6 Integrated Modules
5-Gate Validation Workflow
100% Human-Approved Outputs
01
AI is Assistive, Never Autonomous

Every AI-generated output is a draft. No analysis, no narrative, no statistical conclusion advances in the pipeline without explicit human validation at a designated gate.

02
Full Auditability, Always

Every action — every AI prompt, every approval, every export — is timestamped, attributed, and immutably logged. Reproducibility is not a feature; it is a structural guarantee.

03
Publication-Grade by Design

The platform is built around ICMJE authorship standards, PRISMA 2020, CONSORT, STROBE, and SPIRIT reporting frameworks. Compliance is embedded, not retrofitted.

Six integrated modules.
One coherent workflow.

From literature ingestion to submission pack generation, every stage of the research pipeline is covered — and every stage enforces the same governance standard.

📚
Evidence Library

Multi-database retrieval, deduplication, dual-reviewer screening, and PRISMA 2020 flowchart generation. Risk of bias tools built in.

📊
Dataset & Analysis Engine

Pseudonymised data ingestion, SAP template builder, AI-assisted code generation (R/Python), meta-analytic pooling, and forest plot generation.

✍️
Manuscript Workspace

IMRaD-structured editor with journal-specific formatting, reporting checklist overlays, version control, and co-author permission management.

Validation Workflow Engine

Five mandatory gates — AI draft, statistician review, scientific writer review, PI approval, export. Architecturally enforced; no bypass is possible.

🎯
Journal Intelligence System

AI-assisted journal matching, scope fit scoring, impact factor lookup, author instruction parsing, and cover letter generation — all flagged as AI-draft.

🔐
Audit & Governance Layer

Immutable action logs, ORCID-authenticated validation records, role change tracking, and GDPR/UAE PDPL-aligned data governance.

Every output passes
five mandatory gates.

The validation pipeline is non-negotiable and architecturally enforced. AI does not submit. AI does not approve. AI drafts — humans decide.

01
AI Draft Generated

AI agent produces initial output — literature synthesis, statistical narrative, or manuscript section.

AI Layer
02
Statistician Review

Credentialled statistician reviews all quantitative outputs. Approval logged with ORCID and timestamp.

Human Required
03
Scientific Writer Review

Narrative quality, reporting compliance, and scientific rigour assessed independently.

Human Required
04
PI Final Approval

Principal Investigator provides final authorisation. Only after this step is export unlocked.

Mandatory
05
Export / Submission Pack

Validated manuscript, cover letter, and supplementary files packaged for submission. Audit trail complete.

Human Action Only

Join our inaugural research cohort.

We are onboarding a select group of academic research teams for the pilot phase. Priority is given to systematic review and meta-analysis projects with clear PICO frameworks and institutional affiliation.

Three-tier, governance-first architecture.

The platform separates the public institutional layer, the authenticated research workspace, and the sandboxed AI agent layer. AI outputs can never reach the export stage without traversing all human validation gates.

Layer 01 — Public

metasearchhub.com

Static institutional website. No access to research data. Provides governance documentation, collaboration requests, advisory board profiles, and AI transparency statement. Serves as the credibility interface for institutional partners and funders.

Layer 02 — Application

app.metasearchhub.com

Authenticated research workspace. ORCID OAuth 2.0 access control. Role-based permissions (Researcher, Statistician, Scientific Writer, PI, Admin). Houses all six platform modules. Every action is logged to the immutable audit trail.

Layer 03 — AI Agent

MetaSearch AI

Scoped, logged, validation-gated AI engine. Writes only to draft zones. Cannot access export functions. Every prompt and output is versioned and logged with model identifier and timestamp. Operates under strict safety constraints defined in the AI Transparency Statement.

Authentication

ORCID OAuth 2.0

All researcher identities are authenticated via ORCID — the internationally recognised persistent identifier for researchers. This binds every validation action to a verified academic identity, supporting post-publication audit if required.
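For illustration, the first leg of an ORCID OAuth 2.0 authorization-code flow builds a login URL against ORCID's public authorize endpoint. This is a minimal sketch, not the platform's actual implementation; the client identifier, redirect URI, and state value below are placeholders.

```python
from urllib.parse import urlencode

ORCID_AUTHORIZE = "https://orcid.org/oauth/authorize"

def build_orcid_login_url(client_id, redirect_uri, state):
    """Build the ORCID OAuth 2.0 authorization-code login URL.
    'state' is an unguessable value checked on callback to prevent CSRF."""
    params = {
        "client_id": client_id,        # placeholder: issued by ORCID
        "response_type": "code",       # authorization-code grant
        "scope": "/authenticate",      # basic identity-verification scope
        "redirect_uri": redirect_uri,  # placeholder callback URL
        "state": state,
    }
    return f"{ORCID_AUTHORIZE}?{urlencode(params)}"
```

After the researcher signs in at ORCID, the callback receives a short-lived code that the server exchanges for the verified ORCID iD; every validation action is then signed with that identity.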

What each module actually does.

📚
Evidence Library

Multi-database search (PubMed, Embase, Cochrane, Scopus, grey literature). Automated deduplication by DOI, PMID, and title similarity. Dual-reviewer screening interface with conflict detection. Full-text retrieval queue. PRISMA 2020 flowchart auto-generated from screening records. RoB 2, ROBINS-I, NOS templates integrated.
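The deduplication cascade described above can be sketched as follows. This is illustrative Python only; the key normalisation and the 0.9 title-similarity threshold are assumptions, not the platform's actual parameters.

```python
from difflib import SequenceMatcher

def normalise(s):
    """Case- and punctuation-insensitive key for fuzzy title matching."""
    return "".join(ch for ch in s.lower() if ch.isalnum())

def dedupe(records, title_threshold=0.9):
    """Deduplicate by DOI, then PMID, then fuzzy title similarity.
    Each record is a dict with optional 'doi', 'pmid', 'title' keys."""
    kept, seen_doi, seen_pmid = [], set(), set()
    for rec in records:
        doi = (rec.get("doi") or "").lower().strip()
        pmid = (rec.get("pmid") or "").strip()
        if doi and doi in seen_doi:
            continue                      # exact DOI duplicate
        if pmid and pmid in seen_pmid:
            continue                      # exact PMID duplicate
        title = normalise(rec.get("title", ""))
        if any(SequenceMatcher(None, title, normalise(k.get("title", ""))).ratio()
               >= title_threshold for k in kept):
            continue                      # near-duplicate title
        if doi:
            seen_doi.add(doi)
        if pmid:
            seen_pmid.add(pmid)
        kept.append(rec)
    return kept
```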

📊
Dataset & Analysis Engine

Pseudonymised dataset upload (CSV, SPSS, STATA, R). Data dictionary generation. SAP template builder. AI-assisted R/Python code generation — draft-flagged, watermarked pending statistician validation. Fixed and random effects pooling, I² estimation, Cochran's Q, funnel plot asymmetry testing.
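The pooling statistics named above are standard inverse-variance quantities. A minimal sketch of fixed-effect pooling with Cochran's Q and I² follows; it is illustrative only, since the platform's actual engine generates R/Python code that is itself subject to statistician validation.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I^2.
    effects: per-study effect estimates (e.g. log odds ratios).
    variances: corresponding within-study variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of variability attributable to heterogeneity
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return {"pooled": pooled, "se": se,
            "ci95": (pooled - 1.96 * se, pooled + 1.96 * se),
            "Q": q, "I2": i_squared}
```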

✍️
Manuscript Workspace

Section-by-section IMRaD editor. AI draft generation per section — gated to draft layer. PRISMA, CONSORT, STROBE, SPIRIT, CARE checklist overlays. Vancouver, AMA, APA, journal-specific reference formatting. Version control with diff tracking. Submission pack generator (manuscript + cover letter + supplementary files).

Validation Workflow Engine

Five-step blocking gate architecture enforced at API level, not merely UI level. Each gate requires authenticated completion before the next stage is unlocked. Validation actions are signed with ORCID, timestamped, and written to the immutable audit log. No administrative override is possible without generating an exception record.
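As an illustration of the blocking-gate principle, the sequential logic can be expressed as a small state machine. The gate names mirror the workflow described above, but the class and field names are hypothetical, not the platform's actual code.

```python
GATES = ["ai_draft", "statistician_review", "writer_review", "pi_approval", "export"]
GATE_ROLE = {"statistician_review": "statistician",
             "writer_review": "scientific_writer",
             "pi_approval": "pi",
             "export": "pi"}

class GateError(Exception):
    pass

class Workflow:
    """Sequential blocking gates: gate N+1 cannot complete before gate N."""
    def __init__(self):
        self.completed = []   # ordered, append-only
        self.log = []         # audit entries (ORCID-signed in the real system)

    def complete(self, gate, role, orcid):
        expected = GATES[len(self.completed)]
        if gate != expected:
            raise GateError(f"gate '{expected}' must complete before '{gate}'")
        required = GATE_ROLE.get(gate)
        if required and role != required:
            raise GateError(f"gate '{gate}' requires role '{required}'")
        self.completed.append(gate)
        self.log.append({"gate": gate, "role": role, "orcid": orcid})
```

Because the check runs server-side at the point of the API call, hiding or restyling a button in the UI cannot unlock a later gate.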

🎯
Journal Intelligence System

Journal matching by study design, subject area, and open-access preference. CiteScore, H-index, and acceptance rate data. Scope fit scoring (AI-generated; PI-confirmed). Author instructions parsing with format mismatch flagging. Cover letter template generation. All outputs explicitly labelled as AI-draft recommendations.

🔐
Audit & Governance Layer

Immutable timestamped log for every user action. ORCID-linked validation records. AI action log (prompt, output, model version, timestamp). Role change log. Export log. Governance dashboard for Super-Admin. GDPR and UAE PDPL data retention enforcement. Breach notification workflow embedded.

Ready to explore the platform?

Access is by application during our pilot phase. We review each request individually to ensure alignment with platform governance standards.

Seven non-negotiable governance rules.

Principle Requirement Enforcement Mechanism
AI Assistive Role AI outputs are always drafts. No AI output may be presented as final without human validation. API-level blocking gates; draft watermarks on all AI outputs
Statistician Validation All statistical outputs require review and sign-off by a credentialled statistician. Mandatory Gate 2; ORCID-linked approval record
Scientific Narrative Review All manuscript sections require independent scientific writer review before PI approval is enabled. Mandatory Gate 3; sequential gate logic
PI Final Authority The Principal Investigator holds exclusive final approval authority. This role cannot be delegated in the system. Role-based access control; Gate 4 PI-only authentication
No Automated Submission The platform never submits manuscripts to journals. Export is a human-initiated action only. Export function requires active human session and Gate 4 completion
Full Audit Trail Every action by every user and every AI agent is permanently logged with timestamp and identity. Immutable append-only audit log; cryptographic hash chain
Authenticated Identity All validation actions are linked to ORCID-verified researcher identities. ORCID OAuth 2.0 mandatory for all validation roles
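The cryptographic hash chain cited in the table can be sketched as follows. This is an illustrative sketch using SHA-256; the platform's actual log format is not published.

```python
import hashlib, json

class AuditLog:
    """Append-only log; each entry's hash covers the previous entry's hash,
    so any retroactive edit breaks the chain from that point onward."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, action):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps({"prev": prev, "action": action}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"action": action, "prev": prev, "hash": digest})

    def verify(self):
        """Recompute every hash from genesis; False if anything was altered."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps({"prev": prev, "action": e["action"]}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```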

Role-based access at every layer.

Role Capabilities Gate Authority
Researcher Create projects, upload data, initiate AI drafts, view workspace None — cannot approve any gate
Statistician Review and annotate analysis outputs, approve statistical section Gate 2 only
Scientific Writer Edit manuscript sections, review reporting compliance, approve narrative Gate 3 only
Principal Investigator Full workspace access, final approval authority, export initiation Gate 4 (final); export unlock
Super-Admin (Founder) Full platform control, user management, audit log access, role assignment Platform governance only; cannot override validation gates
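The table above reduces to a small permission matrix. An illustrative sketch follows; the role and gate identifiers are assumptions, and note that Super-Admin deliberately maps to no gate authority.

```python
# Which validation gates each role may approve (illustrative identifiers).
ROLE_GATES = {
    "researcher": set(),         # cannot approve any gate
    "statistician": {2},         # Gate 2 only
    "scientific_writer": {3},    # Gate 3 only
    "pi": {4},                   # Gate 4 (final) and export unlock
    "super_admin": set(),        # platform governance only; no gate override
}

def may_approve(role, gate):
    """True only if the role's gate authority includes this gate."""
    return gate in ROLE_GATES.get(role, set())
```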

Built on established research standards.

Platform outputs are structured to comply with the reporting requirements of major clinical research standards. Compliance is embedded in templates and checklists, not left to researcher recall.

ICMJE Authorship PRISMA 2020 CONSORT 2010 STROBE SPIRIT 2013 CARE RoB 2 ROBINS-I Newcastle-Ottawa Scale GRADE Vancouver References GDPR UAE PDPL ORCID

The problem with unvalidated AI.

AI language models can produce statistically plausible but factually incorrect outputs — a phenomenon well documented in the biomedical literature. In clinical research, a hallucinated reference, an incorrect effect size, or a misconstrued methodology can survive peer review and enter the scientific record. The MetaSearch validation framework exists to prevent this.

Every AI-generated output is explicitly watermarked as a draft. The workflow engine prevents any draft from being exported, submitted, or treated as final until each of the five validation gates has been completed by a human with appropriate credentials and authority. The gates are sequential and blocking: Gate 3 cannot open until Gate 2 is complete; Gate 4 cannot open until Gate 3 is complete. This is not a policy — it is code.

Each gate. In detail.

01

AI Draft Generation

AI Layer

The MetaSearch AI agent generates an initial output — this may be a literature synthesis, a statistical narrative, a manuscript section, or a journal recommendation. The output is logged with model version, prompt hash, and generation timestamp. It is saved exclusively to the project's draft zone and is watermarked DRAFT — AI GENERATED — PENDING HUMAN VALIDATION at every viewing surface.

What the AI cannot do at this stage: access validated zones, export data, or send any communication outside the platform.
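The logged draft record described above might look like the following. This is an illustrative sketch; the field names are assumptions, and only the watermark string is quoted from the platform description.

```python
import hashlib
from datetime import datetime, timezone

def draft_record(prompt, output, model_version):
    """Log entry for an AI draft: model version, prompt hash, timestamp.
    The draft itself is stored only in the project's draft zone."""
    return {
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "watermark": "DRAFT — AI GENERATED — PENDING HUMAN VALIDATION",
        "output": output,
        "zone": "draft",   # never 'validated'; promotion requires the gates
    }
```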

02

Statistician Review

Human Required ORCID Verified

A credentialled statistician reviews all quantitative outputs: effect sizes, confidence intervals, heterogeneity statistics, forest plots, sensitivity analyses. The statistician can annotate, request revision, or approve. Approval is a signed action — linked to the statistician's ORCID, timestamped, and written to the audit log. Revision requests revert the workflow to Gate 1 and require a new AI draft cycle.

This gate blocks even if the PI wishes to proceed. Authority over statistical correctness is vested in the statistician, not the PI.

03

Scientific Writer Review

Human Required

An independent scientific writer assesses narrative quality, logical coherence, reporting standard compliance (PRISMA, CONSORT, STROBE, SPIRIT as applicable), reference accuracy, and language appropriateness for the target journal. This review is explicitly independent of the statistician review and cannot be performed by the same individual. Approval is logged identically.

The scientific writer is not a copyeditor. This role carries methodological assessment responsibility for the narrative layer of the manuscript.

04

Principal Investigator Final Approval

Mandatory Non-Delegable

The PI reviews the complete, validated manuscript — with full access to the audit trail showing all prior review actions. PI approval is a formal declaration of responsibility for the work. This role cannot be delegated to another user within the platform. Upon PI approval, Gate 5 (export) is unlocked exclusively for that session and that project version.

Any subsequent material revision resets the gate sequence from the relevant point of change.

05

Export & Submission Pack

Human Action Only

The export function is unlocked following PI approval. The platform generates a structured submission pack — final manuscript, cover letter, supplementary files, PRISMA flowchart (where applicable), and the complete audit report. The platform does not submit to journals. Submission is always a deliberate human action, taken outside the platform using the generated materials.

The audit report included in the export documents every AI interaction, every validation gate, every approver, and every timestamp — available for disclosure upon journal or ethics committee request.

Expertise across methodology, medicine, and ethics.

The Advisory Board is constituted across four domains: clinical research methodology, biomedical informatics, research ethics, and clinical domain expertise. Board members hold no financial interest in the platform and provide independent guidance on a voluntary academic basis.

SR
Advisory Position
Systematic Review Methodology

Vacancy — Currently recruiting a senior methodologist with Cochrane experience and expertise in PRISMA 2020 and evidence synthesis methodology.

CS
Advisory Position
Biomedical Statistics

Vacancy — Seeking a clinical biostatistician with expertise in meta-analytic methods, Bayesian approaches, and AI-assisted statistical validation.

RE
Advisory Position
Research Ethics & AI Governance

Vacancy — Seeking a research ethicist or bioethicist with expertise in AI in clinical research, data governance, and responsible innovation frameworks.

CE
Advisory Position
Clinical Endocrinology & Metabolism

Vacancy — Seeking a senior academic endocrinologist with Q1 publication record and experience in large-scale observational or interventional diabetes research.

HI
Advisory Position
Health Informatics & Data Science

Vacancy — Seeking a health informatician with expertise in clinical data standards (HL7 FHIR, OMOP CDM), pseudonymisation, and research data management.

JR
Advisory Position
Journal Editing & Publishing

Vacancy — Seeking a current or former editor of a Q1 clinical journal with expertise in peer review standards, reporting guidelines, and publication ethics.

📩

Advisory Board nominations are open. If you are a senior academic researcher, methodologist, or bioethicist with relevant expertise and an interest in responsible AI in clinical research, we welcome expressions of interest. Advisory roles are unpaid academic positions with a commitment of approximately four hours per year. Please contact research@metasearchhub.com with your ORCID and a brief statement of interest.

What the Advisory Board does.

Responsibility Frequency Output
Governance framework review Annual Written governance assessment report
AI capability oversight On material change Approval or amendment recommendation
Validation framework assessment Biannual Gap analysis and improvement recommendations
Conflict of interest declarations Annual Published COI statements on this website
Pilot project scientific review Per pilot project Protocol endorsement or revision request

Three levels of institutional engagement.

Tier 01
Pilot Researcher

Individual academic clinicians or postgraduate researchers undertaking a systematic review or meta-analysis with a defined PICO framework and institutional affiliation.

  • Full platform access for one project
  • AI-assisted evidence synthesis
  • Five-gate validation workflow
  • PRISMA-compliant output generation
  • Pilot period: access by application

Tier 03
Institutional Partner

Academic medical centres, clinical trial units, or research institutes seeking platform integration, custom validation workflow configuration, and institutional branding within the environment.

  • Custom deployment options
  • Institutional ORCID integration
  • Custom governance documentation
  • API access for institutional systems
  • Dedicated support and onboarding

What we are prioritising in the pilot phase.

During the inaugural pilot, we are specifically seeking projects that will stress-test the platform's governance architecture and generate outputs suitable for Q1 journal submission.

Priority Type 01

Systematic Reviews & Meta-Analyses

Projects with a registered PROSPERO protocol (or willing to register), a defined PICO/PICOS framework, and multi-database search strategy. Preference for clinical topics in endocrinology, metabolism, cardiovascular medicine, or infectious disease.

Priority Type 02

Original Clinical Research Manuscripts

Observational studies, cross-sectional analyses, or secondary data analyses with pseudonymised datasets ready for analysis support. Particularly welcome: population-based studies from underrepresented regions, including Africa and the Middle East.

Priority Type 03

Narrative Reviews with Systematic Search

Structured narrative reviews where a systematic search strategy is employed, even if formal meta-analysis is not conducted. Must include defined inclusion/exclusion criteria and reporting against a recognised framework.

Priority Type 04

Study Protocols & SAPs

Prospective study protocols intended for submission to BMJ Open, Trials, or similar protocol-publishing journals, alongside formal statistical analysis plans requiring methodological validation support.

Get in touch.

General Enquiries info@metasearchhub.com
Research Collaboration research@metasearchhub.com
Platform & Technical admin@metasearchhub.com
Advisory Board research@metasearchhub.com
Response Time We aim to respond to all enquiries within 3 working days. Complex collaboration requests may require additional time for appropriate review.

MetaSearch Knowledge Hub operates from the United Arab Emirates. Communications are subject to the UAE PDPL and, where applicable, GDPR. We do not share contact details with third parties.

By submitting this form, you consent to MetaSearch Knowledge Hub storing your contact details solely for the purpose of responding to your enquiry, in accordance with our Privacy Policy.

ℹ️

Pilot Phase Notice: The platform is currently in its inaugural pilot phase. Access is restricted to research teams with active projects, institutional affiliation, and a designated Principal Investigator with ORCID registration. We anticipate opening broader access following the successful completion of the pilot cohort. Submission of an application does not guarantee access — we will contact all applicants within 10 working days of receipt.

Are you eligible?

Criterion Requirement
Institutional Affiliation Applicant must be affiliated with a recognised academic institution, teaching hospital, or research institute. PhD students require a named supervising PI.
ORCID Registration All named team members (PI, Statistician, Scientific Writer) must hold active ORCID iDs. Registration is free at orcid.org.
Project Readiness Applicants should have a defined project — either a systematic review with PROSPERO registration (or intent to register), or an original manuscript with available dataset.
Team Completeness The application must name a PI, a statistician, and a scientific writer. These need not all be from the same institution.
Ethics Clearance Where applicable (original datasets), institutional ethics approval must be in place or confirmed as not required for the specific dataset.

Apply for pilot access.

Applications are reviewed within 10 working days. All data submitted through this form is handled in accordance with our Privacy Policy. We will contact you using the email address provided above.

Version 1.0 — Effective from platform launch. Next review: 12 months from launch, or upon material change to AI capability.

What MetaSearch AI is

MetaSearch AI is a large language model-based assistant integrated into the platform's research workflow modules. It is designed to assist researchers in drafting, summarising, and structuring research outputs — not to make scientific judgements, validate data, or generate authoritative conclusions. It is a writing and synthesis aid, operating under strict functional constraints defined by the platform's governance architecture.

What MetaSearch AI is not

MetaSearch AI is not a research expert, a statistician, a peer reviewer, or an author. It cannot verify the accuracy of data it has not been given. It cannot assess the clinical significance of findings. It cannot determine whether a methodology is appropriate for a given research question. These judgements require human expertise and are made exclusively by the credentialled researchers using the platform.

Hallucination and uncertainty

AI language models can produce outputs that are grammatically fluent, structurally plausible, and factually incorrect — including fabricated references, miscited statistics, and misattributed findings. MetaSearch AI is prompted to flag uncertainty and to avoid generating references it cannot verify. However, the primary safeguard against AI-generated errors is the mandatory validation workflow: no AI output reaches an exported document without statistician and scientific writer review, followed by PI approval.

Citation and reference handling

MetaSearch AI does not independently search live databases. It works within the evidence base uploaded to the Evidence Library by the research team. When generating reference lists or in-text citations, it draws from verified, researcher-provided sources. Any reference generated by AI is flagged for verification during the scientific writer review gate. The platform does not permit AI-generated references to appear in exported manuscripts unless explicitly confirmed by a human reviewer.
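The verification step described here amounts to checking each AI-cited identifier against the researcher-provided Evidence Library. An illustrative sketch, with function and parameter names as assumptions:

```python
def verify_references(draft_dois, library_dois):
    """Split AI-cited DOIs into those confirmed against the Evidence Library
    and those flagged for mandatory human verification at Gate 3."""
    library = {d.lower() for d in library_dois}
    confirmed = [d for d in draft_dois if d.lower() in library]
    flagged = [d for d in draft_dois if d.lower() not in library]
    return confirmed, flagged
```

Flagged references are withheld from any export until a human reviewer explicitly confirms or removes them.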

Logging and auditability

Every AI interaction within the platform is logged: the prompt submitted, the model version used, the output generated, and the timestamp. These logs are accessible to the PI and Super-Admin for the duration of the project and for a minimum of five years thereafter. Upon export, the submission pack includes an AI interaction audit report, enabling full disclosure to journals or ethics committees upon request.

Model identity and version control

The specific AI model(s) powering MetaSearch AI will be disclosed in this statement and in the platform's technical documentation. Model versions are frozen at project initiation — a project will not encounter a different model mid-workflow without explicit notification and PI consent. Changes to the underlying model are treated as a material change requiring advisory board notification.

Data privacy

Research data uploaded to the platform is used solely within the platform's secure environment. Data is not used to train the underlying AI model. Pseudonymised datasets are not transmitted to external AI service endpoints in identifiable form. All AI processing complies with the data processing agreements in force between MetaSearch Knowledge Hub and its AI infrastructure providers.

ICMJE authorship and AI disclosure

MetaSearch AI cannot be listed as an author on any manuscript produced using the platform, in accordance with ICMJE guidelines and the editorial policies of all major clinical journals. The use of AI assistance in manuscript preparation must be disclosed in the Methods section of any submitted manuscript, using language that accurately reflects the scope of AI involvement. The platform provides a standardised disclosure statement for this purpose, which is included in the submission pack.

Questions regarding AI transparency, capability, or governance may be directed to research@metasearchhub.com. We are committed to responding substantively to all AI transparency enquiries within 10 working days.

1. Data Controller

MetaSearch Knowledge Hub (operating at metasearchhub.com and metasearchhub.org) is the data controller for personal data collected through this website and the associated research platform. Contact: admin@metasearchhub.com.

2. Data We Collect

We collect the following categories of data:

  • Identity data: Name, institutional affiliation, ORCID iD, academic title
  • Contact data: Institutional email address
  • Research data: Uploaded datasets (pseudonymised), literature references, manuscript drafts, analysis outputs
  • Usage data: Login records, platform actions, audit log entries, AI interaction logs
  • Communication data: Enquiries submitted via contact forms

We do not collect payment information, patient-identifiable data, or data from individuals under the age of 18.

3. Legal Basis for Processing

We process personal data under the following legal bases: (a) performance of a contract or service agreement — for registered platform users; (b) legitimate interests — for audit logging and platform security; (c) consent — for contact form submissions and communications. Where consent is the basis, you may withdraw it at any time by contacting admin@metasearchhub.com.

4. How We Use Your Data

  • To provide and maintain the research platform
  • To authenticate user identity via ORCID OAuth 2.0
  • To maintain the immutable audit trail required by the governance framework
  • To communicate with you regarding your application, project, or enquiry
  • To comply with legal obligations under UAE PDPL and, where applicable, GDPR

5. Data Sharing

We do not sell, rent, or trade personal data. We share data only with: (a) AI infrastructure providers bound by data processing agreements that prohibit training on your data; (b) ORCID, solely for identity verification; (c) legal authorities, where required by law. All third-party processors are contractually bound to equivalent data protection standards.

6. Data Retention

Platform audit logs are retained for a minimum of five years from project completion, as required for academic integrity purposes. Contact form data is retained for 12 months from last contact. You may request deletion of personal data not subject to mandatory retention by contacting admin@metasearchhub.com.

7. Your Rights

Subject to applicable law, you have the right to: access the personal data we hold about you; correct inaccurate data; request deletion (subject to mandatory retention requirements); object to processing; request data portability. Submit requests to admin@metasearchhub.com. We will respond within 30 days.

8. Security

We implement appropriate technical and organisational measures to protect your data, including encryption in transit and at rest, ORCID-authenticated access controls, and regular security review. No system is perfectly secure; we will notify you in the event of a data breach affecting your personal data within the timeframes required by applicable law.

9. Changes to This Policy

We will notify registered users of material changes to this policy by email. The current version is always available at this URL.

1. Acceptance of Terms

By accessing metasearchhub.com or app.metasearchhub.com, you confirm that you have read, understood, and agree to these Terms of Use. If you do not agree, you must not access the platform.

2. Permitted Use

The platform may be used solely for legitimate academic research purposes by authorised users who have completed the access application process. Commercial use, resale, or sublicensing of platform outputs is prohibited without explicit written permission.

3. Research Integrity Obligations

Users are bound by the following obligations: (a) all data uploaded to the platform must be legally obtained and, where patient data is involved, appropriately consented and pseudonymised; (b) all ORCID iDs provided must belong to the individual named; (c) validation actions — particularly PI final approval — may not be performed by anyone other than the named individual; (d) any publication arising from platform use must accurately disclose the use of AI assistance in accordance with target journal policies and ICMJE guidelines.

4. Prohibited Conduct

  • Uploading identifiable patient data in breach of applicable ethics approvals
  • Misrepresenting authorship, affiliation, or ORCID identity
  • Attempting to circumvent or disable validation gates
  • Using the platform to generate outputs for fraudulent publication
  • Sharing access credentials with unauthorised individuals
  • Reverse engineering or extracting platform code or architecture

5. AI-Generated Content

Users acknowledge that AI-generated outputs within the platform are drafts subject to human validation. MetaSearch Knowledge Hub accepts no liability for the accuracy, completeness, or publication suitability of AI-generated content. Responsibility for the scientific integrity of all outputs rests exclusively with the PI and named co-authors.

6. Intellectual Property

Platform outputs — including manuscripts, analysis results, and related research materials — belong to the research team that generated them. MetaSearch Knowledge Hub claims no intellectual property rights over research outputs. Platform software, design, and infrastructure remain the exclusive property of MetaSearch Knowledge Hub.

7. Limitation of Liability

MetaSearch Knowledge Hub provides this platform on an "as is" basis and does not warrant that it will be error-free, uninterrupted, or fit for any particular purpose. To the maximum extent permitted by law, we exclude liability for any loss or damage arising from use of the platform, including academic, professional, or reputational consequences of publication decisions made on the basis of platform outputs.

8. Termination

We reserve the right to suspend or terminate access without notice in cases of breach of these Terms, suspected research misconduct, or platform security concerns. Users may request termination of their account at any time; mandatory audit logs will be retained in accordance with the Privacy Policy.

9. Governing Law

These Terms are governed by the laws of the United Arab Emirates. Any disputes shall be subject to the exclusive jurisdiction of the courts of the UAE, unless otherwise required by applicable international law.

10. Contact for Legal Matters

Legal correspondence should be directed to admin@metasearchhub.com, marked clearly as "Legal Notice".