Traditional SEO thinking is to target intent-based keywords like "sell my art for cash." This experiment answered the question one step earlier: Is my art worth anything at all?
Here are the findings from carrying out that experiment.
Finding 1: ChatGPT converts at 24.9%
Over 28 days, 122 leads were generated from AI traffic.
My working theory is that AI qualifies users during their chat conversations, right up to the threshold where an AI like ChatGPT cannot complete the transaction itself. To remain a useful assistant, ChatGPT looks for a provider who can fulfill the request and close the loop.
This finding supports targeting conversational outcomes instead of keywords — which goes against what SEO experts have been saying about AI citations.
Finding 2: Launch to first lead was 33 days
This is one of the most surprising findings in this study. I did not expect the website to receive meaningful traffic in its first 90 days, let alone a lead from AI just 33 days after launch. The website was so new that the domain was purchased the same day it went live.
This is a significant finding because it runs counter to the current narrative about AI visibility. The site was brand new, in a highly competitive space, with no expert backing, no founder or organization behind the brand, no external links, no third-party directories, and no social media presence.
Just a site built to gain citations from AI — and it worked.
Finding 3: ChatGPT outperformed organic search results
During the 28 days measured in the report, ChatGPT delivered 105 sessions, while Google Search Console shows organic search sent 22 clicks to the website. This was confirmed against the traffic sources in GA4.
This is an important finding because it challenges the narrative that if you show up in Google, you show up in ChatGPT; these numbers suggest that is not true. The critical point is that organizations measuring their AI discoverability by how they rank in search results will lose AI citation market share to anyone who fills that context gap.
Finding 4: Page weight as a structural advantage
WhatsMyArtWorth.com's homepage measured 5,882 tokens and 6.5KB. The top organic competitor for the same query measured 112,437 tokens and 1.0MB. (I measured each homepage's token count using OpenAI's Tokenizer Tool.)
This is less a hard finding than an interesting observation turned into a working theory.
When two websites, all things being equal, fulfill the same need — and tokens carry a real-world cost — the question becomes: does ChatGPT prefer smaller, lower-token sites for citations?
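As a rough illustration, the comparison can be sketched in Python. The 4-characters-per-token ratio below is a common rule-of-thumb for English text, not the exact tokenizer the report used; for precise counts, use OpenAI's Tokenizer Tool or the `tiktoken` library. The page strings are placeholders, not the real sites.

```python
def page_weight(html: str) -> dict:
    """Return byte size and an approximate token count for a page's HTML."""
    raw = html.encode("utf-8")
    return {
        "bytes": len(raw),
        # ~4 characters per token is a rough heuristic for English text
        "approx_tokens": len(html) // 4,
    }

# Hypothetical lean page vs. a much heavier competitor page
lean_page = "<html><body><h1>What's My Art Worth?</h1></body></html>"
heavy_page = lean_page * 200

lean, heavy = page_weight(lean_page), page_weight(heavy_page)
print(f"lean: {lean}, heavy: {heavy}")
```

If token cost factors into citation selection at all, a page that answers the same question at a fraction of the token budget is cheaper for the model to ingest — which is the structural advantage this theory proposes.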
Finding 5: The schema email
A user sent an email to an address that was only found in the JSON schema data.
My working theory is that the user asked AI for my site's contact info and it extracted it from the JSON data.
The question remains whether the contact info was stored when the site was indexed by AI, or if the address was extracted from the website's schema data the moment the user requested it.
In either case, the implication is significant — it indicates that schema data plays a pivotal role in how AI uses your site.
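The extraction scenario can be sketched in Python. The page fragment and the address (`hello@example.com`) are placeholders, and the parsing logic is an assumption about what an AI system might do when asked for contact info — the point is simply that an email present only in JSON-LD is still trivially machine-readable:

```python
import json
import re

# Hypothetical page: the email appears only inside the JSON-LD schema block,
# not in the visible body -- mirroring the setup described above.
html = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org",
 "@type": "Organization",
 "name": "WhatsMyArtWorth",
 "contactPoint": {"@type": "ContactPoint", "email": "hello@example.com"}}
</script>
</head><body>No email anywhere in the visible page.</body></html>
"""

# Pull the JSON-LD block out of the markup and parse it as structured data.
match = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
schema = json.loads(match.group(1))
email = schema["contactPoint"]["email"]
print(email)  # hello@example.com
```

Whether this happens at index time or at request time, the schema block gives the model a clean, unambiguous path to the answer that scraping visible prose does not.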
Finding 6: The completion gap drove citation, not authority
AI can explain how art valuation works. It cannot appraise a specific painting on someone's kitchen table. That gap — between general knowledge and specific human action — is what triggered citation.
A second working theory involves the liability attached to the question. General-knowledge questions, like pop culture, are low risk. Shopping for new clothes is low risk. Making a financial decision carries real risk. AI does not want to be held liable for giving harmful answers, so the more liability an answer carries, the higher the likelihood of a citation.
An Ongoing Experiment
This experiment is ongoing, and I plan to publish all findings publicly. You can visit the live site at WhatsMyArtWorth.com.