Testing GEO/SEO with Adobe LLM Optimizer

February 24, 2026 · By Tad Reeves

I’m doing a test here to see what it takes to get LLMs (ChatGPT, Perplexity, Grok, Gemini, etc.) to extract non-server-side-rendered text from a page and actually cite it in their responses.

My testbed: I recently wrote this post on our AEM Meetup in NC, and that post has a section, “PDF resources from the event,” with an AEM Edge Delivery block that generates a list of related PDF files out of the AEM Cloud Service DAM. That list contains a bunch of AEM/EDS architecture diagrams that should theoretically be GREAT for an LLM to cite – exactly the kind of PDF files LLMs tend to love, so long as they can find them. But right now, despite some good traditional SEO, zero of them are coming up.

I deliberately created these blocks so that the juiciest part of the data, from an SEO perspective, would be client-side rendered. Meaning: the list of VERY-CITATION-WORTHY PDFs, along with their context (the titles, descriptions and links), would be rendered client-side, and thereby (theoretically) invisible to most crawlers.
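To make the mechanism concrete, here’s a minimal sketch of what a client-side block like this does. This is not the actual block code – the query-index endpoint and the item field names below are my own assumptions for illustration:

```javascript
// Minimal sketch of a client-side Edge Delivery-style block.
// The "/query-index.json" endpoint and the item fields (path, title,
// description) are placeholder assumptions, not the real implementation.

function renderPdfList(items) {
  const entries = items
    .filter((item) => item.path.endsWith('.pdf'))
    .map((item) => `<li><a href="${item.path}">${item.title}</a>: ${item.description}</li>`)
    .join('\n');
  return `<ul class="pdf-list">\n${entries}\n</ul>`;
}

// In the real block, something like this runs in the browser at page load.
// Because the links only exist after this fetch + DOM injection, a crawler
// that doesn't execute JavaScript never sees them.
async function decorate(block) {
  const resp = await fetch('/query-index.json'); // hypothetical endpoint
  const { data } = await resp.json();
  block.innerHTML = renderPdfList(data);
}
```

The server-rendered HTML only ever contains the empty block container; the citation-worthy links live purely in the runtime DOM.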

Forty-eight hours after publishing the article, it’s clear that zero LLMs (Gemini, ChatGPT, Grok, Copilot, etc.) have been able to give me a link to the PDFs, even when I specify the page that contains the links to them.

Given this testbed, I’m trying to craft a prompt (which I won’t put here, as I don’t want to dirty the results) that gets an LLM to cite this article or these PDFs. I’m measuring this using Adobe LLM Optimizer, with a few variations of the prompts I’d expect someone to use when looking for these documents.

One of the next things we’ll try in this experiment is using Adobe LLMO’s Optimize-at-Edge tech to generate pre-hydrated HTML specifically for the LLMs, so that they pick up this text that would otherwise be hidden from them.

Experiment kicked off today (2/24/26) – we’ll see where this goes!

Update 2/25/26: Implemented Adobe LLMO Optimize-at-Edge

As of today, Adobe LLM Optimizer’s Optimize-at-Edge routing is enabled on our CDN.

This means a custom version of our site is now essentially hosted in parallel by Adobe gear, creating a pre-hydrated, LLM- and crawler-friendly version of the site which should theoretically start to pick up the text we’re looking for.
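As a rough mental model of that routing (the crawler patterns and origin hostnames below are my own placeholders, not Adobe’s actual configuration), the decision at the edge amounts to something like:

```javascript
// Sketch of user-agent-based routing at the CDN edge. The crawler
// patterns and origin hostnames are placeholder assumptions; the real
// LLM Optimizer routing rules are managed on Adobe's side.

const AI_CRAWLERS = /GPTBot|PerplexityBot|ClaudeBot|Google-Extended|CCBot/i;

function chooseOrigin(userAgent) {
  // AI crawlers get the pre-hydrated, fully rendered copy of the site;
  // regular visitors keep getting the normal client-side-rendered pages.
  return AI_CRAWLERS.test(userAgent || '')
    ? 'prerender.example.net' // hypothetical pre-render origin
    : 'www.example.net';      // hypothetical primary origin
}
```

The nice property of this split is that human visitors see zero change – only known AI crawlers are steered to the parallel copy.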

As of last night, still, if I asked an LLM (or Google) to “find me a diagram on blog.arborydigital.com which is a diagram of an example Edge Delivery Services / DA / AEMaaCS implementation, showing authentication, authorization, SSO and configuration auth workflows”, zero of them could find it for me, even when I pointed them at the page where the diagrams were held – indicating that none of them could scrape the JS-driven text off the page.
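One way to sanity-check this from the outside: a non-JS crawler effectively sees the page’s HTML with the scripts never executed. A crude helper (my own illustration, not a tool from LLM Optimizer) that strips `<script>` elements and checks whether a string survives in the static markup makes the failure mode obvious:

```javascript
// Crude stand-in for "what a non-JS crawler can see": strip <script>
// elements and check whether a given string survives in the static HTML.
function visibleWithoutJs(html, needle) {
  const stripped = html.replace(/<script\b[\s\S]*?<\/script>/gi, '');
  return stripped.includes(needle);
}
```

Against a page whose PDF list is injected client-side, a server-rendered heading passes this check while the injected links fail it.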

Once these Edge Optimizations are done, I’ll update again!

Update 2/26/26: Deployed Pre-Render

As of yesterday, while Traffic Routing was indeed complete, some manual configuration still had to be done on the Adobe side before the UI would start accepting configurations.

All of the domain’s URLs were then onboarded into pre-render.

LLM-friendly summaries were also added.

We’ll now wait for those to kick in and start being pulled, and then update our results on our PDF readability!

Update 2/26/26: Success! Some LLMs are now IMMEDIATELY able to find the PDF URLs

I waited an hour after deploying the above edge optimization on LLM Optimizer, and then re-tried the PDF searches. The test prompt I’ve been trying is:

Find me a PDF document on blog.arborydigital.com with the description "Diagram of an example Edge Delivery Services / DA / AEMaaCS implementation, showing authentication, authorization, SSO and configuration auth workflows"

Previously, ZERO LLMs were able to find a link to the PDFs in question. Now we’re already getting results – seemingly first with those LLMs that do a live fan-out search and pull live URLs (rather than relying on cached search results).

ChatGPT was able to get the exact PDF URL (which was in client-side JS, and is now being fed pre-rendered to the LLM by Adobe LLM Optimizer’s edge optimization).

Grok was also able to pull the exact PDF URL.

Copilot and Google AI Mode were both still not finding the PDFs themselves (only the HTML heading that already existed), while Perplexity can pull the JS-rendered text but cannot find the link.

Still – this is after just an hour, and we’ve already recovered some visibility!