A few years back, building a comp table for a healthcare acquisition was one of the most tedious things on my plate. I was Director of M&A at a NASDAQ-listed oncology platform - we were buying physician practices, doing add-ons, building out a specialty network. None of those deals were public. No press releases with disclosed multiples, no public filings, nothing you could pull from a Bloomberg terminal.
Every comp table started from scratch. I'd dig through our internal deal history, pull broker CIMs from prior processes we'd looked at, chase down transaction notes from intermediaries I'd worked with. Then came the actual work: scrubbing the data into a consistent format, calculating implied multiples (because half the time the broker's stated multiple didn't match what you got when you divided EV by EBITDA), normalizing for deal structure differences, formatting for the IC presentation. A solid comp table - one I'd actually put in front of our investment committee - could take me the better part of two days.
The mechanical work was what killed the time. Not the judgment - the judgment I could do in my head. It was the formatting, the organizing, the cross-referencing, the Excel work. The part a computer should be doing.
AI handles that part now.
I want to be careful here about what "AI builds comp tables" actually means, because the marketing on these tools outruns reality by a wide margin. Based on what I've seen, AI is genuinely useful for a specific slice of the comp table workflow: organizing raw data, calculating multiples, flagging outliers, and formatting output for presentations. What AI cannot do - and what still requires someone who's been in the room on these deals - is select the right comps, adjust for deal-specific structures, and apply the healthcare-specific nuances that actually determine whether a multiple is relevant to your transaction.
But that mechanical layer? That slice that was eating two days of my time? That's now a morning.
What the Workflow Actually Looks Like
The way I think about this now: AI is your analyst for the data work, and you're the senior doing the judgment work.
The tools that have earned a place in my workflow, based on my experience:
Claude is the workhorse. The reason is the context window - Claude's enterprise tier handles 1 million tokens, which means you can load multiple CIMs, broker notes, and your internal comp history into a single session and ask it to pull consistent fields across all of them. For private healthcare deals where your comps are all sitting in PDFs and deal notes, this matters a lot. It also has a direct Excel integration in beta that's worth knowing about.
ChatGPT with Advanced Data Analysis is the calculator. Once you have structured data, GPT-4o's code interpreter will compute your multiples, run descriptive statistics across your comp set, flag outliers by standard deviation, and produce clean output you can drop into Excel. It's also better than most people at drafting the comp set commentary for your IC memo.
Perplexity handles the public research layer. If your comp set includes larger disclosed transactions - a platform deal that made the trade press, a health system acquisition that was announced - Perplexity will compile them into a table with citations faster than you can run a manual search. The limitation is that it can only surface what was publicly reported, which for private practice deals is usually just "deal announced" with no financial terms.
MEMO: These tools are advancing so fast that juggling a different model for each task (as listed above) will quickly be a thing of the past. As a subscriber to my newsletter, you’ll be first to know!
For teams with institutional budgets, AlphaSense (which acquired Tegus for $930M and now covers 1.4 million private companies with M&A detail) and Hebbia (which KKR and Permira are using for institutional-grade VDR processing) are in a different tier. Hebbia's "Matrix" product runs the same query across hundreds of documents simultaneously - if you're processing a full data room, that capability is real. But for the individual practitioner or lean deal team, Claude and ChatGPT get you very far at $20-200/month.
According to McKinsey's 2026 M&A trends analysis, AI has made deal cycles 10-30% faster and M&A activities 20% cheaper. From what I've seen, the comp table workflow is one of the clearest examples of where that efficiency actually lands.
The Part AI Still Can't Do
Before I give you the prompts, I want to be honest about the ceiling, because I've seen people hand too much of this work to the model and end up with a comp table that looks right but isn't.
Comp selection is still yours. A GI practice acquired by a hospital system in 2024 and a GI practice acquired by a PE platform in the same year are different transactions, even if the headline numbers look similar. The strategic rationale, the pricing dynamics, the buyer's synergy math - none of that is in the data. AI will happily put both in the same comp set unless you tell it otherwise.
Private data gaps are real. No AI tool has access to the private practice transaction multiples you actually need for sub-$50M healthcare deals. The data that formed the basis of the comp work I did at the oncology platform doesn't exist in any database AI can reach. AlphaSense's private company coverage is the best you'll find in an off-the-shelf tool, and even they acknowledge limited coverage on sub-$25M add-on transactions. AI processes the data you bring. It doesn't source it.
Healthcare nuances require deal experience. A primary care group with a Medicare Advantage panel and VBC contracts might trade at 10-20x EBITDA. The fee-for-service primary care practice next door might trade at 3-5x. AI will not catch that distinction unless you explicitly flag it in the prompt. Same with ASC ownership (adds 1-3 turns in my experience), physician retention risk for key-person-dependent practices, and payor mix (practices above 70% commercial can command 40-60% higher multiples than government-payer-heavy groups).
Hebbia's own framing - taking analysts from zero to 90% and freeing them for the last mile - is the right mental model. The 90% is real. The last 10% is irreducibly human, and in a healthcare deal it's where the actual value lives.
One more thing: data security is not optional. Never upload NDA-protected CIM data to public AI tiers - ChatGPT Plus, Claude Pro, anything that isn't an enterprise or API-tier deployment with documented data handling. FINRA's 2026 oversight report now formally addresses gen AI risks in financial services. And Deloitte's analysis documented GPT-4 hallucinating financial figures in M&A contexts - always verify AI's arithmetic against your source documents.
That's the framework and the honest context. The full workflow - the three specific prompts I use for extraction, normalization/outlier analysis, and healthcare-specific annotation - is below for subscribers.
The Three Prompts That Cut My Comp Table Time by 80%
These prompts are designed for Claude (preferred for document-heavy work) or ChatGPT GPT-4o. Each assumes you are bringing your own deal data - AI is the processor, not the source.
Prompt 1: Extracting and Structuring Data from Raw CIM/Broker Notes
When to use this: You have 3-5 broker CIMs or deal summary documents in PDF or text form and need to pull consistent financial fields into a comp table shell. This is the step that used to take me half a day.
Load the documents into Claude (Enterprise or API tier if CIM data is NDA-protected) and send this prompt:
I am going to paste the financial summary sections from several healthcare M&A transaction documents.
Your job is to extract the key metrics from each and organize them into a structured comparable transactions table.
For each transaction, extract and calculate the following fields. If a field is not explicitly
stated but can be reasonably inferred from the document, calculate it and note the assumption.
If a field is missing and cannot be inferred, mark it as "ND" (not disclosed).
Required columns:
Target Practice Name (anonymize to "Target A", "Target B" etc. if needed for confidentiality)
Specialty / Sub-Specialty
State / Metro Area
Transaction Date (or "Est. [Year]" if approximate)
Total Purchase Price ($M)
LTM Revenue ($M)
LTM Adjusted EBITDA ($M) - use the broker's stated EBITDA if provided; flag any stated add-backs and their amounts
EV / LTM Revenue (calculate)
EV / LTM Adjusted EBITDA (calculate)
Buyer Type (PE Platform / PE Add-On / Health System / Strategic / Unknown)
# Physicians / Providers
# Locations
Key Notes (payor mix if stated, ASC ownership, earnout structure, key-person considerations)
Format the output as a markdown table. After the table, provide a separate section called
"Data Quality Flags" that lists any transactions where you made assumptions, found inconsistencies between stated revenue and EBITDA margins, or where key fields were missing.
Here are the source documents:
[PASTE DOCUMENT TEXTS HERE OR ATTACH FILES]
Why this works: The "Data Quality Flags" section is the critical piece. It forces Claude to be explicit about every assumption it made, so you know exactly what to verify against the original documents before you rely on the table. Without that, AI will silently fill gaps and you won't know what it invented.
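A practical bridge to the next step: the markdown table this prompt returns parses cleanly into pandas for downstream calculation. A minimal sketch, with hypothetical table contents and column names:

```python
import pandas as pd

# Hypothetical output from the extraction prompt (all figures are made up)
md = """| Target | Specialty | EV ($M) | LTM EBITDA ($M) |
|---|---|---|---|
| Target A | GI | 42.0 | 4.5 |
| Target B | Derm | 18.5 | 2.3 |"""

# Keep every line that isn't the |---| divider row, then split on pipes
rows = [
    [cell.strip() for cell in line.strip().strip("|").split("|")]
    for line in md.splitlines()
    if not set(line) <= set("|- ")
]
df = pd.DataFrame(rows[1:], columns=rows[0])
for col in ("EV ($M)", "LTM EBITDA ($M)"):
    df[col] = df[col].astype(float)

# Recompute the multiple yourself rather than trusting the model's arithmetic
df["EV/EBITDA"] = (df["EV ($M)"] / df["LTM EBITDA ($M)"]).round(1)
```

The divider-row filter and the `strip("|")` split are enough for a simple pipe table; for anything messier, paste the table into Excel and clean it there.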
Prompt 2: Normalizing and Computing a Comp Table with Outlier Analysis
When to use this: You have a partially built comp table in Excel and need to calculate descriptive statistics, flag outliers, and produce presentation-ready output. Use ChatGPT Advanced Data Analysis for this step - attach your Excel file or paste the table directly.
I'm attaching a comp table with [N] healthcare M&A transactions. The table includes the raw
financial data I've collected. I need you to:
STEP 1 - DATA NORMALIZATION
Review all entries for consistency:
Flag any transactions where the stated EV/EBITDA multiple does not equal the Enterprise Value divided by EBITDA (i.e., check arithmetic)
Flag any transactions with EBITDA margins below 5% or above 40% - these may reflect different normalization approaches and should be noted
Identify any transactions where Revenue is listed but EBITDA is missing (cannot calculate EV/EBITDA)
STEP 2 - MULTIPLE CALCULATIONS
For the full comp set and each sub-group I define below, calculate:
Mean, Median, 25th Percentile, 75th Percentile, Min, Max for both EV/Revenue and EV/EBITDA
Present these as a summary statistics table
Sub-groups to analyze separately:
PE buyer vs. Strategic/Health System buyer
Platform transactions (EBITDA > $3M) vs. Add-on transactions (EBITDA < $3M)
Transactions 2021–2022 vs. 2023–2025 (to show market multiple compression)
[Add your own sub-groups here based on your deal context]
STEP 3 - OUTLIER ANALYSIS
Flag any individual transactions where the EV/EBITDA multiple is:
More than 1.5 standard deviations above or below the mean
Below 5x or above 20x
For each flagged outlier, note whether it should potentially be excluded from the "refined" comp set, and explain why.
STEP 4 - PRESENTATION FORMAT
Produce two outputs:
A full comp table (all transactions, sorted by EV/EBITDA descending) with the summary statistics row at the bottom - formatted for a management presentation
A "refined" comp table excluding the outliers you identified, with the same summary row
Use this formatting: multiples to one decimal place (e.g., 9.4x), dollar values in $M to
one decimal place, dates as "Q3 2024" format.
[ATTACH YOUR EXCEL FILE OR PASTE YOUR TABLE HERE]
Why this works: Breaking it into four explicit steps means you can review each stage before moving on. The subgroup analysis by buyer type and vintage is what actually makes a comp table useful for a healthcare deal - the headline median doesn't tell you much without those cuts. And the two-table output (full set plus refined set excluding outliers) is exactly what you want for an IC presentation.
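If you want to verify what the model produces in Steps 2 and 3, the same statistics and outlier flags take a few lines of pandas. A minimal sketch on a hypothetical comp set:

```python
import pandas as pd

# Hypothetical comp set (deal names, buyer types, and multiples are made up)
comps = pd.DataFrame({
    "deal": ["A", "B", "C", "D", "E", "F"],
    "buyer": ["PE", "PE", "Strategic", "PE", "Strategic", "PE"],
    "ev_ebitda": [7.2, 8.5, 11.0, 9.1, 10.4, 23.0],
})

# STEP 2: summary statistics for the full set, plus a buyer-type cut
stats = comps["ev_ebitda"].agg(["mean", "median", "min", "max"]).round(1)
q25, q75 = comps["ev_ebitda"].quantile([0.25, 0.75]).round(1)
by_buyer = comps.groupby("buyer")["ev_ebitda"].median()

# STEP 3: flag anything beyond 1.5 standard deviations or outside 5x-20x
mu, sigma = comps["ev_ebitda"].mean(), comps["ev_ebitda"].std()
comps["outlier"] = (
    ((comps["ev_ebitda"] - mu).abs() > 1.5 * sigma)
    | (comps["ev_ebitda"] < 5)
    | (comps["ev_ebitda"] > 20)
)
refined = comps[~comps["outlier"]]  # the "refined" comp set for the IC deck
```

One caveat: pandas' `std()` is the sample standard deviation, which widens the 1.5-sigma band slightly on small comp sets, so it flags a little less aggressively than the population version.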
For reference as you interpret the output: here are EV/EBITDA ranges by practice size and by specialty, based on current market data from FOCUS Investment Banking and Sofer Advisors:
By Practice Size (2025 market data):
| Practice EBITDA | Median Multiple | Typical Buyer |
|---|---|---|
| < $500K | 4.5x | Regional groups, hospitals |
| $500K - $1.5M | 6.2x | PE platforms, health systems |
| $1.5M - $3M | 7.8x | PE platforms, health systems |
| $3M - $5M | 9.1x | Smaller PE platforms, health systems |
| $5M+ | 11.3x | PE platform creation, strategics |
By Specialty (2025-2026):
| Specialty | Platform Range | Add-On Range | Key Driver |
|---|---|---|---|
| Primary Care (VBC) | 10-20x | 3-5x (FFS) | VBC contracts, panel attribution |
| Dental / DSO | 9-11x | 5-8x | Scale, multi-location |
| Behavioral Health | 8-11x | 5-8x | Tech-enabled, payor contracts |
| Dermatology | 8-10x | 6-8x | Cosmetic revenue mix |
| GI | 9-12x | 7-10x | Endoscopy ownership |
| Cardiology | 12-15x | 8-12x | Cath lab, ancillary services |
| Ophthalmology | 12-20x | 5-11x | ASC ownership, retina |
| Orthopedics | 9-12x | 7-10x | ASC, procedure volume |
| Oncology | 10-14x | 8-10x | Drug distribution synergies |
Note: Median healthcare services EV/EBITDA has moderated to ~11.5x in 2025, down from 14.5x in 2024, per Sofer Advisors. Use 2021-2022 vintage comps carefully - those multiples reflected a different rate environment and deal volume that is unlikely to repeat.
Prompt 3: Adjusting for Healthcare-Specific Deal Nuances
When to use this: You have a comp table built but need to annotate it for a client presentation - adding the healthcare-specific context that explains why certain comps are more or less relevant to the deal you're working on. This is the prompt that replaces the annotation session that used to take me 30-45 minutes.
I'm preparing a precedent transactions analysis for a healthcare deal and need to annotate our comp set with deal-relevance commentary.
Here is the context for the current deal (fill in your specifics):
Target specialty: [e.g., behavioral health / outpatient therapy]
Target EBITDA: [e.g., $4.2M]
Target payor mix: [e.g., 65% commercial, 25% Medicaid, 10% Medicare]
Buyer type: [e.g., PE platform]
Target structure: [e.g., multi-site, associate-driven, no ASC]
Geography: [e.g., Southeast, suburban markets]
Deal year: [2025]
Below is our comp set. For each transaction, provide a "Relevance Assessment" column using one of three ratings:
HIGH: Closely comparable to the current deal (similar specialty, size, buyer type, structure)
MEDIUM: Partially comparable but with one or more meaningful differences
LOW: Included for context only - differences likely make this an unreliable comp
For each MEDIUM or LOW rating, provide a one-sentence explanation of the key difference. Flag specifically for the following healthcare valuation factors:
Whether ASC/ancillary ownership likely inflated the multiple vs. our target (no ASC)
Whether the buyer type (strategic vs. PE) likely produced a premium or discount vs. our deal
Whether the transaction was in a significantly different reimbursement environment (pre-2023)
Whether the size differential (platform vs. add-on) makes the multiple non-comparable
After the annotated table, write a 3-sentence "Comp Set Commentary" paragraph suitable for inclusion in an IC memo or management presentation. It should explain the selected comp set, note any adjustments made, and anchor where the current deal's multiple falls relative to the most relevant comps.
[PASTE YOUR COMP TABLE HERE]
Why this works: This is the highest-value prompt of the three because it forces the AI to do qualitative comparison work using factors that actually drive multiple differences in healthcare - payor mix, ASC ownership, buyer type, vintage. The three-sentence commentary at the end is designed to drop directly into an IC memo. It won't be perfect, but it'll be 80% of the way there, and editing a draft is faster than writing from scratch.
The factors the prompt flags are the ones that matter most from my experience:
Payor mix: Practices above 70% commercial can trade at 40-60% higher multiples than government-payer-heavy groups. A comp that doesn't disclose payor mix is worth noting as a data quality gap.
ASC ownership: Based on what I've seen, ASC or ancillary ownership adds roughly 1-3 turns. A cardiology practice that owns its cath lab is not the same comp as one that doesn't, even if the EBITDA is identical.
Physician retention risk: Sofer Advisors notes a 1-2 multiple turn discount for owner-dependent practices. If the comp was a sole-practitioner sale and your deal is associate-driven, flag it.
VBC vs. fee-for-service: A primary care practice with a Medicare Advantage panel and ACO attribution can trade at 3-6x the multiple of a comparable FFS practice. These should never be in the same comp set without explicit adjustment.
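To make the turn adjustments concrete, here is the arithmetic on a hypothetical target. Every number below is an assumption for illustration, not market data:

```python
# Illustrative arithmetic only: all figures are assumptions chosen to show
# how "turns" of EBITDA translate into enterprise value dollars.
target_ebitda_m = 4.2        # target LTM adjusted EBITDA ($M)
base_multiple = 8.5          # hypothetical median of the refined comp set

asc_adjustment = -2.0        # comps owned ASCs, target does not: back out ~2 turns
key_person_discount = -1.5   # owner-dependent target: 1-2 turn discount

adjusted_multiple = base_multiple + asc_adjustment + key_person_discount
implied_ev_m = target_ebitda_m * adjusted_multiple
print(f"{adjusted_multiple:.1f}x -> implied EV of ${implied_ev_m:.1f}M")
```

The point of writing it out: on a $4.2M-EBITDA target, each turn you add or subtract moves enterprise value by $4.2M, which is why the ASC and key-person questions are worth settling before you anchor on a comp set median.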
A Note on Hallucination Risk
This bears repeating because the cost of getting it wrong in M&A is not trivial. Deloitte's analysis found that in financial and healthcare contexts, AI hallucinations can produce errors where "a difference of only 0.5% could, in certain situations, amount to millions of dollars."
Three rules I follow:
Always verify the arithmetic. After AI produces a comp table with calculated multiples, spot-check five transactions manually: does Enterprise Value divided by EBITDA equal the stated multiple? Have the model show its math explicitly.
Use enterprise tiers for confidential data. ChatGPT Free/Plus, Claude Free/Pro - none of these should receive NDA-protected CIM content. Claude Enterprise and OpenAI's API with appropriate data agreements are the minimum standard for client materials.
The model is a starting point, not a deliverable. Every comp table that goes in front of an IC or gets sent to a client has to have a human review pass. AI gets you from zero to 90%. The last 10% - comp selection, structure adjustments, deal-specific context - is where your expertise earns its place.
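The first rule lends itself to a script you can run over every AI-generated table before it goes anywhere. A minimal sketch, with hypothetical deal figures:

```python
# Spot-check the model's arithmetic: recompute each multiple from the source
# EV and EBITDA figures and flag any disagreement with the stated multiple.
# All deal figures below are hypothetical.
deals = [
    # (name, EV $M, LTM EBITDA $M, multiple as stated in the AI output)
    ("Target A", 42.0, 4.5, 9.3),
    ("Target B", 18.5, 2.3, 8.0),
    ("Target C", 30.0, 2.5, 14.0),  # deliberately wrong: 30 / 2.5 = 12.0x
]

flags = []
for name, ev, ebitda, stated in deals:
    computed = round(ev / ebitda, 1)
    if abs(computed - stated) > 0.1:  # tolerance for one-decimal rounding
        flags.append((name, stated, computed))

for name, stated, computed in flags:
    print(f"{name}: stated {stated}x, computed {computed}x - verify against source")
```

Anything the loop prints goes back to the source document, not to the model, for resolution.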
The workflow above took something that used to consume two full days and compressed it into a few focused hours. The mechanical work is handled. What remains is the judgment work - and from what I've seen, that part isn't going anywhere.
If you found this useful, subscribe to Healthcare M&AI for weekly analysis on how AI is changing the way healthcare deals get done. Practitioners only - no fluff.
