Re:search – Exploring the Invisible Infrastructure of the Web
Our guest contributor took part in the so-called Re:search Workshop, where they explored the invisible infrastructure of the web, focusing on search engines and chatbots, and compared and discussed the results of searching and prompting them side by side.
By Kevin Klyssing Jensen
Google has gone from being a noun to a verb. If we want to look something up, we ‘google it’ without a second thought. This is what researcher Renée Ridgway terms ‘ubiquitous googling’ and it’s become an unconscious habit, something we do automatically. But more and more people are becoming aware of the underlying surveillance and that searching isn’t as neutral as it once seemed. ‘The Google search process is a black box—its results shaped by opaque systems that rank, advertise, and categorize us into groups, not individuals,’ Ridgway explains.
Nowadays we are witnessing a shift in how we search with the rise of AI and chatbots. Instead of a list of hyperlinks, we are increasingly delivered a single answer that seems authoritative: summaries that replace exploration with convenience.
As AI reshapes the landscape of information, it becomes even more urgent to understand what happens behind the interface. Workshops like Re:search offer an opportunity to see and question the invisible systems that structure our everyday interactions with the web, along with how much data is collected on us.
Re-search.site
The Re:search workshop at Aarhus University (AU), led by AU researcher Renée Ridgway and artist/programmer Anders Visti, invited participants to peek behind the digital curtain. They built an interactive platform (re-search.site) that allows users to visualise, compare and interpret results from search engines and AI chatbot responses.
This ‘bespoke’ (that is, hand-coded) platform is designed specifically for investigating search results during workshops. The first method let us compare results from two different search engines, revealing how browser choice, data tracking and location shape what we see. We began by choosing a keyword based on our own interests and searched with our default settings. Using the browser’s developer tools (‘Inspect’), we copied the page’s outer HTML and pasted it into the source-code editor Visual Studio Code. We then searched for the same keyword with another search engine and browser. The re-search site compared these HTML results, visualising the pages and their ranking differences. Ridgway and Visti call this method ‘data visualisation as transcription’: it seeks to make search infrastructures visible and therefore more tangible to the user.
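The comparison step can be sketched in a few lines of Python: extract the result links from each copied ‘outer HTML’ in page order, then line up each URL’s rank on the two engines. This is a minimal illustration of the idea, not the platform’s actual code.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect outbound hrefs in document order, roughly mirroring result ranking."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.links.append(href)

def ranked_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def compare_rankings(html_a, html_b):
    """Map each URL to its (rank on engine A, rank on engine B); None if absent."""
    a, b = ranked_links(html_a), ranked_links(html_b)
    urls = dict.fromkeys(a + b)  # preserves first-seen order
    return {u: (a.index(u) if u in a else None,
                b.index(u) if u in b else None) for u in urls}

# Two toy results pages standing in for the copied 'outer HTML'
page_a = '<a href="http://example.org/x">x</a><a href="http://example.org/y">y</a>'
page_b = '<a href="http://example.org/y">y</a><a href="http://example.org/z">z</a>'
print(compare_rankings(page_a, page_b))
```

Real results pages are, of course, far messier than this toy markup, which is part of why the workshop’s visual, side-by-side rendering is so helpful.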

Figure 1 Re:search comparison between ‘Data Afterlife’ on Edge using DDG to the left and on Firefox using Google search to the right.
I tried using DuckDuckGo on Edge and Google search on Firefox with my chosen keyword ‘data afterlife’, as seen in Figure 1. When I compared the results of the two identical search queries, the re-search site made it blatantly obvious just how much the choice of browser and search engine shapes what we see. I already knew that search results can be riddled with advertisements, but seeing the visual representation made by the re-search site was still thought-provoking and allowed for deeper reflection and analysis. With DuckDuckGo, the results aligned with the academic term ‘data afterlife’: the top result was Chicago University’s bookstore, and the next six links were all articles on the same topic. The Google results, by contrast, reflected a much more commercial approach: four of its top five results were bookstores.

Figure 2 Re:search results from keyword ‘AI Sycophancy’ using Duckduckgo.com on Firefox to the left and Duckduckgo.com on Chrome to the right.
What became crystal clear using re-search.site was that the web each of us sees is not the same. For example, even privacy-focused DuckDuckGo delivered different results depending on the browser, as seen in Figure 2. A search with DuckDuckGo on Chrome included ads, while the same search with DuckDuckGo on Firefox did not, showing that the choice of browser is not without consequences.

The URLs (Uniform Resource Locators) also vary vastly in length and content depending on which browser and search engine you choose, meaning one stores more information than another, as seen in Figure 3. I had often wondered why some URLs were so incredibly long. When hovering over the interface, the re-search site defined some of the parameters included in the URL. The parameters are separated by ampersands (&), and each has a specific purpose. Information captured in the URL can include where you are from, when you searched, what links you clicked on, and whether your search was performed directly on the search engine or funnelled, meaning routed through a tracking system. All of this data is then likely used to build a profile of you and shared with ad-tech partners and the search companies themselves, who analyse it and train their algorithms on previous user interactions.
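The anatomy of such a URL can be inspected with Python’s standard library: everything after the ‘?’ is a series of ampersand-separated key=value pairs. The URL and parameter names below are made up for illustration; real engines use their own (often opaque) keys.

```python
from urllib.parse import urlsplit, parse_qs

# A hypothetical search URL -- parameter names are illustrative only
url = ("https://search.example.com/search?q=data+afterlife"
       "&client=firefox&hl=en&source=hp&ei=abc123")

query = urlsplit(url).query   # everything after the '?'
params = parse_qs(query)      # split on '&', then on '='

for key, values in params.items():
    print(f"{key} = {values[0]}")
```

Running this prints five key=value pairs, including `q = data afterlife` (the search terms) alongside parameters that could, on a real engine, encode browser, language and session identifiers.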

Figure 3 Re:search helps users explore the hidden layers of the URL. The left is DDG on Firefox and the right is Google search on Firefox.
Through the workshop’s exploratory format, and made visible by re-search.site, we stumbled upon the fact that Google’s ‘Incognito’ mode, marketed as a privacy feature, ultimately provides the same search results as a normal session. However, the URL is far shorter in Incognito mode, suggesting that less data is gathered about the user.
Another interesting insight about specific search engines, in this case Bing, is that they redirect their users. As seen in Figure 4, re-search.site cannot draw connections between the results because every result first goes to a Microsoft server (Microsoft owns Bing) and is then redirected, so every result link starts with ‘bing.com’. This is apparently done as a way of tracking users: Microsoft collects and processes data on its own servers before sending the user on to their final destination.
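Conceptually, unwrapping such a redirect means reading the true destination out of the tracking link’s query string. A minimal sketch, assuming a hypothetical ‘u’ parameter carrying the target (real engines often encode the destination differently):

```python
from urllib.parse import urlsplit, parse_qs, unquote

def final_destination(redirect_url, param="u"):
    """Pull the real target out of a tracking redirect's query string.

    The parameter name 'u' is an assumption for illustration; real engines
    use their own, sometimes encoded, schemes.
    """
    qs = parse_qs(urlsplit(redirect_url).query)
    return unquote(qs[param][0]) if param in qs else None

# Hypothetical redirect link of the kind a results page might contain
link = "https://www.bing.com/ck/a?u=https%3A%2F%2Fexample.org%2Farticle"
print(final_destination(link))  # -> https://example.org/article
```

The point is the indirection itself: the click lands on the engine’s own server first, which is exactly what lets it log the interaction before passing the user along.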

Figure 4 Re:search results using Bing on Edge to the left and Yahoo on Firefox to the right.
The re-search site offers a selection of search engines. At the time of the workshop, it included American (DuckDuckGo, Bing, Yahoo, Google), Chinese (Baidu) and Russian (Yandex) options. The hope is eventually to include European search engines such as Qwant and Ecosia in the selection, but how they would perform remains unclear. The site also offers a diverse range of browsers to choose from: Brave, Edge, Firefox, Opera, Safari and Chrome.
Chatbot Rodeo
In the second part of the workshop, we used an interface the organisers call a ‘chatbot rodeo’. At the time, it compared real-time responses to the same prompt from four different AI chatbots: Gemini (Google), ChatGPT (OpenAI), Llama (Meta) and Le Chat (Mistral). Participants’ queries were sent through a proxy server, an intermediary that forwards requests while hiding the user’s identity, ensuring anonymity as the responses rolled in.
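A chatbot rodeo of this kind can be sketched as a proxy that fans the same prompt out to several backends in parallel and collects the answers. The backend functions below are stubs standing in for real API calls; their names and behaviour are purely illustrative, not the workshop’s implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub chatbot backends standing in for real API calls made by the proxy
def ask_gemini(prompt):  return f"Gemini: echo of {prompt!r}"
def ask_chatgpt(prompt): return f"ChatGPT: echo of {prompt!r}"
def ask_llama(prompt):   return f"Llama: echo of {prompt!r}"
def ask_lechat(prompt):  return f"Le Chat: echo of {prompt!r}"

BACKENDS = {"Gemini": ask_gemini, "ChatGPT": ask_chatgpt,
            "Llama": ask_llama, "Le Chat": ask_lechat}

def rodeo(prompt):
    """Send one prompt to every backend in parallel and collect the answers.

    Because the proxy sits in the middle, the backends see the server's
    identity rather than the individual participant's.
    """
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in BACKENDS.items()}
        return {name: f.result() for name, f in futures.items()}

answers = rodeo("How diverse is your training data?")
for name, text in answers.items():
    print(name, "->", text)
```

Fanning out in parallel matters for the ‘real-time’ feel: the slowest model determines when the last answer rolls in, not the sum of all four response times.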

Figure 5 Chatbot Rodeo, four different real time answers to a user generated prompt.
Participants reviewed their answers and discussed in plenary what they found interesting about the responses they received. For many of the prompts, ChatGPT tended to provide the shortest responses, often phrased in an evasive manner, as seen in Figure 5. It also became apparent that American chatbots often refer to themselves as ‘I’, reinforcing an anthropomorphic view of chatbots, which in turn encourages prosocial behaviour towards them. This sycophancy and these differences were not just stylistic: they revealed cultural assumptions and design choices embedded within AI systems.
The insights gathered from the chatbot responses were fairly consistent. Personally, I used the prompt: ‘How much of your training material is gathered from Western sources, seen from a cultural perspective, as opposed to, for example, Middle Eastern or East Asian ones?’ To my surprise, most of the chatbots provided quite informative answers. As shown in Figure 5, the responses aligned with my earlier observation: ChatGPT’s answer was short and evasive, while Gemini avoided specific numbers. In contrast, Llama and Le Chat provided more detailed explanations, including percentages.
It came as no surprise that the chatbots’ training data would primarily come from Western cultures, but I still wanted to see whether they actually had precise numbers and in what tone they would provide the information. All the American chatbots emphasised that their training data was diverse and consisted of a wide range of texts culled from the Internet, while also stating that Western texts may be more heavily represented, highlighting the chance of bias. The American chatbots did acknowledge the imbalance, though, framing it somewhat as an unavoidable limitation, almost as if they had done the best they could with what they had.
Reflections on Re:search
Renée Ridgway and Anders Visti led a hands-on, engaging workshop, one that offered a rare glimpse into the inner workings and infrastructures of search engines, browsers, algorithms and ads that shape users’ results. They all influence what we find, and by extension, how we understand the world around us. The re-search.site makes clear that even something as simple as your choice of search engine or browser can determine what information you have access to and what kinds of data is collected about you.
In the end, the re-search.site emphasises and visualises that search engines and browsers are not passive tools, but active mediators as infrastructures of knowledge. Being aware of how they function is not just a technical skill, but a civic one.
Thanks to Renée Ridgway for feedback and editing.
Picture: Renée Ridgway to the right and Anders Visti to the left at a workshop in Aarhus on search and chatbot literacy. By Kevin Klyssing Jensen