Climate Chatbot

Leveraging AI and Natural Language Processing, in collaboration with the climate desk, to answer users' climate-related questions and aid in the discovery of content that matters to them.

See press releases:

The Washington Post Press

Axios Press

The Verge Press

The Hill Press

CNN Reliable Sources Press

GadgetBond Press

EuroNews Press

Nieman Lab

Role
Senior Product Designer/VQA

Year
2024

Objective

The Post has published a wealth of information on climate-related topics, climate impacts, and solutions. So much, in fact, that users feel overwhelmed and find it daunting to grasp all that is out there. It's unclear to users where to begin if they want to learn more about the climate space.

We created a landing page with a user-facing search interface where readers can ask questions on any climate-related topic and receive an AI-generated response drawn from The Post's journalism. The tool leverages AI and natural language processing to serve readers.

Goals

Relevance

Use a search and information retrieval system to find the most relevant published articles in the Climate section for any given query
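As an illustration only (this is not The Post's actual implementation), the retrieval step could be sketched with a hypothetical mini-corpus and simple bag-of-words cosine similarity:

```python
import math
from collections import Counter

# Hypothetical mini-corpus standing in for published Climate-section articles.
ARTICLES = {
    "heat-waves": "Heat waves are growing longer and more intense as the planet warms.",
    "ev-adoption": "Electric vehicle adoption is accelerating as battery costs fall.",
    "sea-level": "Sea level rise threatens coastal cities as ice sheets melt.",
}

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts, lowercased."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the ids of the k articles most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(ARTICLES, key=lambda aid: cosine(qv, vectorize(ARTICLES[aid])), reverse=True)
    return ranked[:k]
```

A production system would use semantic embeddings rather than raw word overlap, but the shape is the same: score every published article against the query and keep the top matches.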

Summarize

Send the retrieved articles to a large language model to summarize them and formulate an answer to the query based solely on the defined corpus of articles and information
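Again purely as a hypothetical sketch (the `build_prompt` helper and its wording are invented for illustration), the "solely from the defined corpus" constraint is typically enforced by building the model's prompt around only the retrieved excerpts:

```python
def build_prompt(question: str, excerpts: list[str]) -> str:
    """Assemble an LLM prompt that restricts the answer to the given excerpts.

    The numbered-source format also lets the answer cite which article
    each claim came from.
    """
    sources = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(excerpts))
    return (
        "Answer the question using ONLY the numbered excerpts below. "
        "If the excerpts do not contain the answer, say you cannot answer.\n\n"
        f"{sources}\n\nQuestion: {question}\nAnswer:"
    )
```

The explicit "say you cannot answer" instruction is what makes the graceful-failure user story possible: when the corpus has no relevant coverage, the tool can explain why instead of guessing.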

Recirculate

Surface discoverable content and recirculate users to climate coverage in a seamless way

Results

As of August 8, 2024

Pageviews

67,997

Submitted questions

33,217

Suggested questions

65%


Design Process

    • As a user, I want to ask an AI bot a question about climate science and have it provide a succinct answer.

    • As a user, I want to see suggested questions The Post thinks would be helpful starters.

    • As a user, I want to ask a question and give feedback on how well it was answered and on whether the sourced articles are relevant to my interests.

    • As a user, I want to ask an AI bot climate-related questions, and when it can't answer one, I want an explanation of why along with a relevant CTA to continue my discovery journey.

  • Given the experimental and relatively new nature of AI, our objective was to carefully select language that communicates how the tool operates, how we plan to keep improving the model, and the current risk factors. In this project, we prioritized language that is:

    Educational:

    • Educate the user that this tool uses AI and is experimental
      Always present and prominent throughout the flow

    Informational:

    • Inform the user that any response is based solely on our published reporting (and only ours)
      At least present on initial landing but important throughout as well

    • Show a clear methodology ("how it works") section that goes a bit deeper into how the tool works
      Can be treated as an appendix and secondary

    • Imply, if not outright state, that answers may contain mistakes, e.g. "This is a beta experience. Please verify the auto-generated answers by consulting the full articles. Notice a mistake?"
      Always present and prominent throughout the flow

  • To measure success, we tracked:

    1. Queries per user

    2. Qualitative feedback

    3. Return visits

    4. Pageviews per session

  • Our previous VQA process was hindered by the absence of a streamlined system the entire team could use to ensure effectiveness and timeliness. Given this project's tight deadline, my primary objective was to optimize the VQA process between design and engineering, ensuring all items were accounted for while delivering a highly functional, accessible, and delightful product. I developed and implemented a new VQA document featuring a tagging system designed to:

    • Clearly define all VQA issues and bugs

    • Explicitly indicate the priority level of reported issues

    • Accurately translate items for the engineering team

    • Effectively communicate approvals

    • Clarify any remaining issues

  • Through this project, I realized how critical it is to clearly inform users when AI is involved in an experimental experience. We dedicated considerable time to refining our communication to ensure clarity about functionality and associated risks.

    Furthermore, I spent significant time optimizing the VQA process. Given the project's rapid turnaround, I reviewed past successes and challenges to develop a comprehensive document for tracking reported issues and approvals. The document proved instrumental: shared among designers, product managers, and engineers, it became an essential part of our daily stand-ups for monitoring progress leading up to launch.

Concepts