The Methodology
Between March 15 and April 5, 2026, we ran a simple test: we asked ChatGPT "Who is the best dentist in [city]?" for 50 US cities. We chose cities across different population sizes: 15 major metros (500k+), 20 mid-size cities (100k to 500k), and 15 smaller cities (50k to 100k).
For each city, we recorded:
- Every practice ChatGPT recommended (typically 2 to 5 per query)
- Whether the recommendation included specific details (services, insurance, specialties)
- Each recommended practice's Foursquare listing status (claimed, complete, categories)
- Their Google Business Profile completeness
- Their Yelp presence and rating
- Their website schema markup (present or absent)
- Their Google review count and rating
We then compared the recommended practices against a random sample of 5 non-recommended practices in each city (250 total) to identify the differentiating factors.
The Key Findings
Finding 1: Only 1.3% of Dentists Get Recommended
ChatGPT recommended an average of 3.2 practices per city. Across our 50 cities, we recorded 161 unique practice recommendations. Given the thousands of dental practices in these markets, roughly 1.3% are being recommended. The other 98.7% do not appear in ChatGPT's response at all. There is no "page 2." You are recommended or you are not.
Finding 2: Foursquare Presence Was the Strongest Predictor
Of the 161 recommended practices, 148 (92%) had a claimed, complete Foursquare listing. Among the 250 non-recommended practices we sampled, only 34 (14%) had a claimed Foursquare listing.
This was the single strongest predictor we found. A claimed Foursquare listing with complete information was present in almost every recommendation and absent from almost every non-recommendation. For context on why Foursquare matters so much, see our article on the Foursquare and ChatGPT data pipeline.
Finding 3: Directory Consistency Mattered More Than Review Count
88% of recommended practices had consistent NAP (name, address, phone) data across at least 3 major directories (Google, Foursquare, Yelp). Only 31% of non-recommended practices had this consistency. Meanwhile, review count showed almost no correlation. Several recommended practices had fewer than 100 Google reviews, while many non-recommended practices had 300+. Quantity is not the signal. Data consistency is.
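To make "consistent NAP" concrete, here is a rough Python sketch (our own illustration, not part of the study's tooling) of how name, address, and phone fields can be normalized before comparing them across directories, so that cosmetic differences like "Street" vs "St" or phone formatting do not count as mismatches. The listing data below is made up for the example:

```python
import re

def normalize_nap(name, address, phone):
    """Normalize NAP fields so formatting differences don't register as inconsistencies."""
    name = name.lower().strip()
    # Collapse common address abbreviations to a single form
    address = address.lower().strip()
    for full, abbr in [("street", "st"), ("avenue", "ave"), ("suite", "ste")]:
        address = re.sub(rf"\b{full}\b", abbr, address)
    address = re.sub(r"[.,]", "", address)
    address = re.sub(r"\s+", " ", address)
    # Keep digits only, and drop a leading US country code
    phone = re.sub(r"\D", "", phone)
    if len(phone) == 11 and phone.startswith("1"):
        phone = phone[1:]
    return (name, address, phone)

# Hypothetical listing data for one practice on three directories
listings = {
    "google": ("Smile Dental", "123 Main Street, Suite 4", "(217) 555-0100"),
    "foursquare": ("Smile Dental", "123 Main St Ste 4", "217-555-0100"),
    "yelp": ("Smile Dental", "123 Main St, Ste 4", "+1 217 555 0100"),
}

normalized = {site: normalize_nap(*nap) for site, nap in listings.items()}
consistent = len(set(normalized.values())) == 1
print("NAP consistent across directories:", consistent)
```

After normalization, all three listings above resolve to the same tuple, so they count as consistent. The raw strings differ on every field, which is exactly why unmanaged listings so often fail this check.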
Finding 4: Specific Categories Beat Generic Ones
76% of recommended practices had specific dental categories beyond just "Dentist" on their Foursquare and Google listings, such as "Cosmetic Dentist," "Pediatric Dentist," "Emergency Dental Service," and "Orthodontist" (where applicable). Only 22% of non-recommended practices had secondary categories. This makes sense: when someone asks for a "cosmetic dentist," the AI needs that category to match.
Finding 5: Schema Markup Was Present But Not Universal
41% of recommended practices had LocalBusiness or Dentist schema markup on their website. Only 8% of non-recommended practices had schema markup. This suggests schema is a contributing factor but not the primary one. It amplifies the effect of strong directory data rather than replacing it. Read more about the role of schema in our schema markup guide.
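For illustration, a minimal JSON-LD block using the schema.org "Dentist" type might look like the following. The practice details here are placeholders; a real implementation should mirror the exact NAP data used on the practice's directory listings:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Dentist",
  "name": "Example Dental Care",
  "telephone": "+1-217-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St, Ste 4",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  },
  "url": "https://www.example.com",
  "openingHours": "Mo-Fr 08:00-17:00"
}
</script>
```

The key point is alignment: the name, address, and phone in the markup should match the directory data character for character, so the schema reinforces the same signal rather than introducing another variant.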
Finding 6: Review Content Mattered, Not Stars
We analyzed the Google reviews of recommended vs non-recommended practices. The average star rating was nearly identical (4.7 vs 4.6). The difference was in the content. Recommended practices had reviews that mentioned specific procedures, insurance acceptance, and location details 3x more often than non-recommended practices. A review saying "excellent Invisalign results at the downtown location, they filed my Delta Dental claim directly" is worth more to AI than 50 reviews saying "great dentist!"
Is your practice in the 1.3% or the 98.7%?
We will test your practice across every AI platform and tell you exactly where you stand.
Book Your Free AI Visibility Audit
What Non-Recommended Practices Were Missing
The 250 non-recommended practices we analyzed had clear, consistent gaps:
- 86% had no claimed Foursquare listing. Some had auto-generated listings with incomplete or incorrect data. Most had no listing at all.
- 69% had NAP inconsistencies across directories. Different phone numbers, address format variations, or outdated business names on at least one platform.
- 92% had no schema markup on their website. Their sites looked fine to humans but were opaque to AI.
- 78% had only generic Foursquare/Google categories. Just "Dentist" with no secondary categories for specific services.
- Average review detail score: 2.1 out of 10. Reviews were short, generic, and lacked service-specific keywords. The recommended group averaged 6.8 out of 10.
The encouraging part: every one of these gaps is fixable. None of them require a new website, paid advertising, or years of effort. Most practices can address all five gaps within 30 days. Our dental-specific AI optimization guide covers the exact steps.
City Size Analysis
We found interesting patterns across city sizes:
Major metros (500k+): ChatGPT recommended an average of 3.8 practices. Competition was highest, but so was the opportunity. Many established practices in these markets had neglected their Foursquare presence entirely. A new competitor with complete directory data could displace a long-standing market leader.
Mid-size cities (100k to 500k): Average of 3.1 recommendations. This was the sweet spot for opportunity. Enough search volume to matter, but competition for AI visibility was lower than in major metros. Several of these markets had only 1 or 2 practices with complete data.
Smaller cities (50k to 100k): Average of 2.7 recommendations. In 4 cities, ChatGPT struggled to recommend any practice and gave hedged responses like "I would suggest checking Google for local dentists in [city]." This means the first practice in these markets to optimize will own the AI recommendation outright.
What This Means for Your Practice
The data is clear. AI search recommendations are not based on who has the best website, the most reviews, or the longest history. They are based on data completeness and consistency across the specific platforms AI uses to make decisions.
Foursquare is the foundation. Directory consistency is the amplifier. Specific categories, detailed reviews, and schema markup are the differentiators. Miss any one of these and you are in the 98.7% that ChatGPT does not mention.
The window of opportunity is still wide open. 86% of non-recommended practices have not even claimed their Foursquare listing. That means the barrier to entry is still low. But it will not stay that way. As more practices realize the importance of AI visibility, the competition for those 3 recommendation slots per city will increase dramatically.
Frequently Asked Questions
How did you test ChatGPT for dentist recommendations?
We asked ChatGPT the same question in 50 US cities: "Who is the best dentist in [city]?" We recorded the recommendations, then analyzed the digital presence of every recommended practice and a sample of non-recommended practices in the same markets. All tests were conducted in March and April 2026.
What percentage of dentists does ChatGPT recommend?
In our test across 50 cities, ChatGPT recommended an average of 3.2 practices per city. With an average of 250+ dental practices per metro area in our sample, that means roughly 1.3% of dentists get recommended. The other 98.7% do not exist in ChatGPT's response.
What did recommended dentists have in common?
Three things appeared consistently: a complete and claimed Foursquare listing (92% of recommended practices), consistent NAP data across 3+ directories (88%), and specific dental categories listed beyond just Dentist (76%). These three factors were far more predictive than Google ranking, review count, or website quality.
Does having more Google reviews help ChatGPT recommend a dentist?
Review count alone had no correlation with ChatGPT recommendations. However, review content mattered. Practices with reviews mentioning specific procedures (implants, Invisalign, emergency care) were 3x more likely to be recommended than practices with generic reviews, regardless of count.
Can a new dental practice get recommended by ChatGPT?
Yes. Several recommended practices in our study had been open less than 3 years. The key factors were data completeness and consistency, not business age. A new practice with a complete Foursquare listing, optimized schema, and detailed reviews can outperform a 20-year practice with poor data.
Were the results the same every time we asked ChatGPT?
Not exactly. ChatGPT's responses varied slightly between sessions, but the core recommendations were consistent about 80% of the time. The top 1 or 2 recommendations were stable. The 3rd and 4th recommendations rotated more. This suggests a tiered confidence system where practices with stronger data signals get more consistent placement.
Get Your Practice Into the 1.3%
We will audit your AI presence and build you a specific action plan to get recommended by ChatGPT, Perplexity, and Google AI Overviews.
Takes 15 minutes. No sales pitch.
Book Your Free AI Visibility Audit