What I Learned From Running a Concierge Search Engine
In October 2021, I ran a concierge search engine called Genius Search. That’s a fancy way of saying that I asked my friends to pay me $10/month to research an unlimited number of open-ended questions. I would write and send them a summary of each topic as quickly as I could.
It may seem like I volunteered to become the world’s most underpaid personal research assistant. But I was testing a hypothesis that there is a better way to search the web for complex questions.
This is a follow-up to A Google Replacement Will Not Look Like Google, which explains what’s wrong with the current state of search engines and outlines the opportunity to index new forms of content.
- What are complex questions?
- Method
- Lessons Learned
- Queries are complex because life is complex
- Complex answers pull from many disparate sources
- Complex queries need alternative corpora
- Finding answers is an iterative process
- Complex answer-finding is automatable in 2022
What are complex questions?
These are complex, open-ended questions that have no definitive or straightforward factual answer. Think of the Google searches you might personally spend 30 minutes on, opening a bunch of tabs and skimming a bunch of web pages.
- Product comparison:
- "What kind of inflatable kayak should I buy?"
- "What are the pros and cons of a Subaru Forester vs. a Toyota RAV4?"
- Travel planning:
- "What are some romantic weekend getaways within driving distance of San Francisco?"
- "Where should I stay in Oahu if I'm traveling with kids?"
- Miscellaneous curiosities:
- "What's going on with this computer chip shortage?"
- "How can I make the transition from a software engineer to a product manager?"
- "If I liked Parasite, what other Korean movies should I watch?"
Google does not do well on these queries. Most of the company’s improvements to search pull structured, short, factual data onto the search results page, but complex questions cannot be answered by short facts. Secondly, the standard blogspam/content-farm pages that surface at the top of Google contain only surface-level information. The real content lives on specialist sites or in alternative corpora like forums and videos.
Method
I wanted to test the following aspects of building a product around complex searches.
Market:
- Do people have queries they want answered that are simply too hard to answer with current search tools?
- Are they willing to pay money for this service?
Product:
- Do the answers to these complex queries exist on the Web?
- Can a reasonably intelligent human (me) find answers with sufficient effort?
- Can this process be automated given current technology?
I solicited queries from my alpha testers. I rejected any queries that would require access to specialized databases (e.g. LexisNexis) or specialized knowledge (e.g. reading financial reports).
While researching, I limited myself to the results on the first 3 pages of Google. I could perform multiple search queries to answer multiple aspects of the complex question, or reframe the question in response to facts I discovered. I allowed myself to read each page as closely as needed, and also click around on other pages within the same domain.
My report was generated exclusively from pulling quotes from the web pages I found, with as little original thought as possible. This mechanical way of parsing webpages (called “extractive summarization”) would lend itself to being automated.
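As a rough illustration of what that extractive process looks like mechanically, here is a minimal sketch that scores each sentence from a set of retrieved pages by its word overlap with the query and keeps the top few. The `pages` data and the scoring function are illustrative assumptions, not the actual procedure I followed:

```python
import re
from collections import Counter

def extract_snippets(query, pages, k=3):
    """Rank sentences from retrieved pages by word overlap with the query."""
    query_terms = set(re.findall(r"\w+", query.lower()))
    sentences = []
    for page in pages:
        # Naive sentence split on terminal punctuation.
        sentences.extend(s.strip() for s in re.split(r"(?<=[.!?])\s+", page) if s.strip())
    def score(sentence):
        words = Counter(re.findall(r"\w+", sentence.lower()))
        return sum(words[t] for t in query_terms)
    return sorted(sentences, key=score, reverse=True)[:k]

# Toy stand-in for the web pages found via Google.
pages = [
    "Which inflatable kayak you should buy depends mostly on budget. "
    "Hardshell boats track better in wind.",
    "Drop-stitch floors make an inflatable kayak stiffer. Avoid vinyl hulls.",
]
print(extract_snippets("which inflatable kayak should I buy", pages, k=2))
```

A real extractive summarizer would use much better sentence scoring, but the shape of the task is the same: select and quote, don’t generate.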
Lessons Learned
The most surprising thing is that people actually signed up for Genius Search. I received 20 questions from 5 people. Before you balk at drawing conclusions from an n=20 study, you should know that major ranking decisions inside Big Search Companies You’ve Heard Of are routinely made on the basis of a handful of “motivating queries.”
And overall, people seemed to like the results:
“amazing, I spend an average of 1-3 hours researching a ~$50-500 purchase before buying it. would happily pay for "genius research" to take care of finding the "best of" for things.”
“I actually did like 10-12 hrs of research [on this myself] and I felt like I could have gotten by with this memo instead”
Here’s what I learned from the exercise:
Queries are complex because life is complex
When people think about search queries, they have a tendency to focus on the ones that can be answered by “facts,” for example, finding the solution to an error message while programming.
In an Eisenhower matrix, such queries are important and urgent. For these queries, almost any search engine will do, and the user will brute-force their way to an answer even if the search engine fails them.
But the questions I received from Genius Search fell into the important-but-not-urgent quadrant. Here’s how they broke down by topic:
| Topic | Count |
| --- | --- |
| Product Reviews / Purchasing advice | 5 |
| Finances | 5 |
| Health | 4 |
| Career | 3 |
| Other | 3 |
While I can’t share all the queries, they tended to be starting points in a larger journey. People asked for product recommendations to start a new hobby, or for evidence to support a career transition. Since these queries had no singular answer, no single document would suffice and no algorithm could perfectly rank the content to deliver a tidy answer.
Complex answers pull from many disparate sources
To write a Genius Search report for the average query, I had to perform ~3 different Google searches and pull answers from ~15 different search results.
For example, one question I received asked “how much money has flown from traditional mutual/index/pensions funds into venture capital this year?” It was impossible to get a crisp answer to this, but snippets could be found across many different web pages. Some articles focused on particular hedge funds that were crossing over into VC and the size of their latest investments. Other articles focused on the percentage of limited partners (LPs) that were interested in shifting their investments towards early-stage venture capital.
To assemble a 360-degree picture of that particular question required reformulating the query several times and pulling quotes from a variety of sources. This was a trend for most questions people asked.
Complex queries need alternative corpora
At this point, it is folk wisdom to append “reddit” or “forum” to the ends of queries to get real answers from Google. This wisdom held up well during my research.
“What to consider when buying Hyundai Tucson PHEV vs. the Toyota RAV4 Prime?”
For this question, the usual automotive content farms were useless; they parroted the same few facts from the manufacturers’ spec sheets. The real, substantive answers came from a user’s post on a Club Lexus forum thread. A YouTube video also did a good job summarizing the differences between the non-US versions of these cars.
So it seems that not just forums but also videos are good sources of information when it comes to long-tail, complex queries. A search engine that excels at complex queries needs to think beyond web pages to satisfy users.
Finding answers is an iterative process
Sometimes in search, the real answers are the friends you meet along the way... or something. At least in my research, the terms and phrases that you encounter while searching are just as informative as the concrete “facts”.
“What’s the difference between different types of retinol anti-aging active ingredients and which is best for me to use?”
The answer to this query didn’t start coming together until I had encountered enough terminology to understand the different concepts involved. There are many compounds that are considered retinoids and there are many formulations of those compounds (e.g. creams, serums, etc). To get concrete answers to a query like this, you must issue queries about particular compounds. So the process went something like this:
- Search for a high-level question
- Find the words that people are using
- Search for the same question but using specific terminology
- Repeat
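The loop above can be sketched in code. Everything here is a hypothetical stand-in: `search` fakes a real search API over a toy corpus, and `refine` just appends the most frequent unfamiliar term found in the results (a real system would filter stopwords and weight terms properly):

```python
import re
from collections import Counter

def search(query, corpus):
    """Stand-in for a search API: return documents sharing any query word."""
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(re.findall(r"\w+", doc.lower()))]

def refine(query, results, n_terms=1):
    """Append the most frequent new term seen in the results to the query."""
    known = set(query.lower().split())
    counts = Counter(
        w for doc in results
        for w in re.findall(r"[a-z\-]+", doc.lower()) if w not in known
    )
    return query + " " + " ".join(t for t, _ in counts.most_common(n_terms))

corpus = [
    "Retinoids include tretinoin and adapalene; tretinoin needs a prescription.",
    "Adapalene gel is available over the counter for anti-aging use.",
    "Tretinoin cream is stronger than most retinol serums.",
]

query = "retinol anti-aging"
for _ in range(2):  # search, learn a new term, search again with it
    results = search(query, corpus)
    query = refine(query, results)
print(query)
```

After two passes the query has picked up the specific compound name (“tretinoin”) that the high-level question never contained, which is exactly the terminology-discovery loop described above.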
“ What are the best online shops for jewelry for [my wife]? Especially watches and earrings. Nice pieces that will last a long time, but not overpriced/ostentatious. Possibly those that are well-liked by her peer group”
The same held true for this product-purchase decision. No single resource could answer a question like this. But initial queries like [millennial jewelry] yielded many brand names. I selected a few names that kept cropping up and then searched for specific reviews of and experiences with those brands. Again, the search process involved getting the lay of the land before homing in on answers.
Complex answer-finding is automatable in 2022
Finally the upshot: it is feasible for a computer to do what I did as a concierge Google searcher. The answer to many of these questions was clearly on the Web, though not always highly ranked by Google. This is simply a question of crawling and indexing different corpora.
I could satisfactorily answer most questions by simply copy-pasting the relevant snippet from a collection of web pages I retrieved. Recent advances in NLP, especially Transformer-based models for semantic retrieval, already make this a very tractable task.
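As a sketch of that retrieval step, the snippet below ranks candidate passages by cosine similarity between vector representations of the query and each passage. The `embed` function here is a crude bag-of-words stand-in; in practice a Transformer sentence encoder (e.g. from the sentence-transformers library) would produce the vectors, and the snippets would come from crawled pages rather than a hard-coded list:

```python
import math
import re
from collections import Counter

def embed(text, vocab):
    """Toy stand-in for a Transformer sentence encoder: bag-of-words counts."""
    counts = Counter(re.findall(r"\w+", text.lower()))
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

snippets = [
    "The RAV4 Prime has more electric range than the Tucson PHEV.",
    "Sourdough starters need daily feeding at room temperature.",
]
query = "electric range of the Tucson plug-in hybrid"

# Shared vocabulary so all vectors have the same dimensions.
vocab = sorted({w for s in snippets + [query] for w in re.findall(r"\w+", s.lower())})
qv = embed(query, vocab)
best = max(snippets, key=lambda s: cosine(embed(s, qv and vocab), qv))
print(best)
```

Swapping the toy `embed` for a learned encoder is what turns this from keyword matching into semantic retrieval: the vectors then capture meaning, so a query and a snippet can match without sharing any words.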
Thinking further out, there is ongoing research on generative text models that are “grounded” in facts retrieved from known external sources. This means a future GPT-like model could summarize passages it has retrieved from the Web without making up wild stories of its own.
The opportunity is wide open to do better on complex queries than Google. Based on my findings, a search engine for complex questions would surface relevant snippets from multiple diverse documents, highlight frequently-used phrases that the searcher should be aware of, and offer tools to help the user refine their query based on the content of the previous search.
These are some of the ideas we are experimenting with as we build a new kind of search engine. To learn more, follow us on Twitter or send us an email.