Introducing the next evolution of Related Prompts
Empathy is excited to unveil the next phase of our Related Prompts: Related Prompts V.1, a major leap forward in our conversational search offerings. Building on the success of the original Related Prompts feature, V.1 delivers on-demand, dynamic suggestions at search time, powered by Empathy’s own LLM infrastructure. In this post, we explain how Related Prompts V.1 differs from V.0 and how it reflects our commitment to sustainable, ethical GenAI and cutting-edge engineering design.
Release and comparison of the Related Prompts versions
Figure: A shopper searches for “Haute couture dresses” and is presented with three static Related Prompts V.0—expansive follow-up questions that are distinct from, yet closely related to, the products matching the original query. In the new Related Prompts V.1, these follow-up queries are generated on the fly at search time.
With Related Prompts V.1, Empathy moves from static, offline prompts to fully dynamic query suggestions. Related Prompts V.0 were generated at index time: index-based suggestions stored with the catalog and only updated on reindexing. By contrast, V.1 suggestions are generated dynamically at search time—query-based suggestions powered by our self-hosted LLM service whenever a shopper searches. As one of our recent case studies explains: “the dynamic generation of Related Prompts at query time . . . generating these elements on the fly rather than relying on pre-indexed data . . . overcomes typical catalog synchronicity issues.”
Some key differences lie in how Related Prompts V.1 use advanced language models—such as DeepSeek or Qwen3—running on our GPU clusters to generate real-time, context-aware suggestions. Instead of training custom models, we build structured instruction prompts that include relevant catalog and session data, allowing the LLM to dynamically return the most useful related prompts or questions.
In contrast, V.0 also used LLMs, but at index time: suggestions were generated ahead of time using static prompts and stored in the system. While merchandisers could still curate and adjust these prompts manually, the content remained fixed unless re-indexed. In short, V.1 behaves more like an adaptive, real-time assistant, whereas V.0 was a static, index-based enhancement with limited flexibility.
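To make the query-time flow concrete, here is a minimal sketch of how a structured instruction prompt might be assembled from catalog and session data before being sent to the LLM. All names (`SearchContext`, `build_instruction_prompt`, the field names) are illustrative assumptions, not Empathy’s actual API.

```python
# Hypothetical sketch of query-time prompt assembly; names are illustrative.
from dataclasses import dataclass, field


@dataclass
class SearchContext:
    query: str
    top_products: list[str]  # titles of the best-matching catalog items
    session_queries: list[str] = field(default_factory=list)


def build_instruction_prompt(ctx: SearchContext, n_prompts: int = 3) -> str:
    """Compose a structured prompt asking the LLM for related follow-up queries."""
    catalog_block = "\n".join(f"- {title}" for title in ctx.top_products)
    history_block = "\n".join(f"- {q}" for q in ctx.session_queries) or "- (none)"
    return (
        f"You are a shopping assistant. The shopper searched for: '{ctx.query}'.\n"
        f"Matching catalog items:\n{catalog_block}\n"
        f"Earlier queries this session:\n{history_block}\n"
        f"Suggest {n_prompts} short, distinct follow-up queries the shopper "
        f"might ask next. Return one per line."
    )


prompt = build_instruction_prompt(
    SearchContext(
        query="haute couture dresses",
        top_products=["Silk evening gown", "Embroidered tulle dress"],
        session_queries=["designer dresses"],
    )
)
```

Because the prompt is rebuilt per request, any catalog or session change is reflected immediately—no reindexing step required.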
Empathy’s past blog posts lay the groundwork for this update. For example, we previously introduced Related Prompts as a way to synthesize natural-language queries and give shoppers conversational guidance. The new version builds on that foundation with real-time generation and broader availability. (For more on our original Related Prompts feature and GenAI governance, see our posts “Enhancing your commerce search experience…” and “Improving Related Prompts to enhance GenAI governance…”.)
Sustainable AI and GenAI Ethics
GenAI’s rise brings real-world costs—from electricity use to water for cooling—that the industry can no longer ignore. Large language models consume vast amounts of energy for computation and millions of litres of water to cool data centres. At Empathy, we tackle these challenges head-on with our infrastructure and design choices. Our LLM services (including the ones behind Related Prompts V.1) run on self-hosted GPU clusters optimized for efficiency. By controlling the hardware environment, we can power each node with renewable energy where available and tune it for low power draw. This yields GPU efficiency in self-hosted environments, meaning faster generation and a smaller carbon footprint.
We also prioritize longevity and reuse. For instance, our pool-design pattern lets different stores share question templates when applicable, avoiding duplicate compute work. We continually refine prompts offline with caching and persistence, so the system doesn’t waste cycles regenerating identical content for every shopper.
Ethical design is just as crucial as sustainability. Empathy’s GenAI is built on a privacy-first, consent-first philosophy. Our entire search platform was conceived to analyse collective intent without ever harvesting personal profiles. Related Prompts (and now V.1) are generated within a secure, isolated framework that customers can even host in their own cloud. We support zero-party data interactions—shoppers only share what they choose—and we never scrape or train on private catalogs or copyrighted content. In fact, our LLM training pipeline explicitly excludes any merchant intellectual property or proprietary site data.
Putting trust at the center: As our founder Angel Maldonado wrote, “privacy [is] a fundamental human right”, and customers should never feel objectified. Our engineering respects that ethos: all prompt generation is audited for compliance (GDPR, CCPA, etc.), and every suggestion is filtered to exclude personal data and inappropriate content. In other words, Empathy’s Related Prompts are as friendly to the planet and to shopper privacy as they are useful.
In combination, these efforts dramatically reduce redundant processing. We also leverage caching, so if a user queries for X, the generated results are stored and reused for subsequent identical queries—completely bypassing the need to re-run the LLM. In short: we make our AI “green” by design, optimizing for high CPU/GPU utilization and minimal idle time.
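The caching described above can be sketched in a few lines. This is a hedged, illustrative example—the stubbed `generate_with_llm` function and the cache sizing are assumptions, not the production design—but it shows the core idea: identical queries reuse stored suggestions instead of re-running the LLM.

```python
# Minimal sketch of query-level caching; identical queries hit the cache
# and skip the LLM entirely. Names and sizes are illustrative.
from functools import lru_cache

llm_calls = 0  # counts how often the (stubbed) LLM is actually invoked


def generate_with_llm(query: str) -> list[str]:
    """Stand-in for the real LLM call."""
    global llm_calls
    llm_calls += 1
    return [f"More about {query}", f"Alternatives to {query}"]


@lru_cache(maxsize=10_000)
def related_prompts(normalized_query: str) -> tuple[str, ...]:
    # Normalizing before caching (e.g. lowercasing, trimming) raises hit rates.
    return tuple(generate_with_llm(normalized_query))


first = related_prompts("haute couture dresses")
second = related_prompts("haute couture dresses")  # served from cache
```

After the first request, every subsequent identical query is answered from the cache, so the GPU does no redundant work.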
Design and Engineering Enhancements
To deliver these new capabilities, our team implemented a suite of architectural improvements behind the scenes:
- Shared question pools: We built a pool-design pattern that lets multiple clients reuse the same generated questions when suitable. This avoids redundancy across similar catalogs and ensures consistent quality. If two stores sell the same core products, they can draw from the same prompt pool, saving effort.
- Automated configuration: New Empathy instances can now spin up Related Prompts V.1 with minimal manual work. We added automation to the onboarding pipeline so prompt templates and related settings auto-provision per site. In practice, spinning up a new store now auto-generates default question sets based on its catalog schema.
- Validation checks: We improved our validation processes to vet generated questions before they go live. Each question passes through AI-driven guardrails (e.g. correct grammar, inventory alignment) and human rules. This step filters out nonsensical or inappropriate prompts, ensuring high-quality suggestions.
- Duplication detection: Our QA system scans across customer environments to detect near-duplicate questions. Whenever two stores have overlapping suggestion sets, we flag them so we can consolidate or tailor them. This avoids hitting shoppers with identical queries on different pages.
- Knowledge enrichment (Perplexity integration): A custom script now queries Perplexity.ai using key product and category terms to scrape supplementary information. The results enrich our site data (e.g. synonyms, attributes, related concepts), which in turn leads to better question relevance. Think of this as auto-aggregating public knowledge about your products to spark new prompts.
- Ethical LLM training: All these enhancements align with our core AI ethics. We rely primarily on foundational models like Qwen3 or DeepSeek, which are publicly available and widely accepted in the industry, as there is broad consensus that their training practices meet acceptable ethical standards. We may also fine-tune models using our own data, but we never train on private catalogs or third-party intellectual property. This ensures there is no cross-contamination of customer data: every shopper’s query remains isolated and anonymous, so our Related Prompts comply with all relevant regulations and our internal ethical standards.
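As an illustration of the duplication-detection step above, here is a small sketch that flags near-duplicate questions between two stores using token-level Jaccard similarity. The similarity measure and threshold are assumptions for illustration; the production QA system is not specified in this post.

```python
# Hedged sketch of cross-store near-duplicate detection; the metric and
# threshold are illustrative, not the production QA system.


def tokens(question: str) -> set[str]:
    return set(question.lower().split())


def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two questions, in [0, 1]."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def flag_near_duplicates(store_a: list[str], store_b: list[str],
                         threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return question pairs whose token overlap meets the threshold."""
    return [(qa, qb) for qa in store_a for qb in store_b
            if jaccard(qa, qb) >= threshold]


flags = flag_near_duplicates(
    ["What sizes do these dresses come in?",
     "Are these dresses machine washable?"],
    ["What sizes do these dresses come in?",
     "Do you ship internationally?"],
)
```

Flagged pairs can then be consolidated into a shared pool or tailored per store, so shoppers never see identical questions on different pages.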
Together, these improvements make Related Prompts V.1 a powerful, efficient, and responsible tool for commerce search.
Empathy’s Related Prompts V.0 were already a big step toward conversational, human-centric search. With Related Prompts V.1, we’re taking the next step: on-the-fly generative suggestions that learn from context, respect privacy, and minimize environmental impact. We look forward to seeing how retailers use these new capabilities to make search smarter and more sustainable for shoppers everywhere.
For more details on our Related Prompts journey, see our earlier posts on conversational search and GenAI governance. We’re committed to continuous innovation, so stay tuned for what’s next!