Facebook has reportedly launched a chatbot that teaches employees how to deflect difficult questions about the company posed by family and friends over the holidays.
On Monday, The New York Times reported that Facebook began rolling out the chatbot to employees just before Thanksgiving. It is designed to coach workers on fielding tough questions about the company from friends and family over the holidays.
The chatbot, called "Liam Bot," helps workers figure out how to respond when "Mom or Dad accuse the social network of destroying democracy" or allege that "Mark Zuckerberg, Facebook's chief executive, was collecting their online data at the expense of privacy."
Essentially, the AI powering the chatbot shows employees how to respond to questions about hate speech or privacy. Most sample responses are deliberately generic so they can be applied to a range of topics. For example, Liam Bot recommends that users respond to tough questions with variations of the following phrases:
- "Facebook consults with experts on the matter."
- "It has hired more moderators to police its content."
- "It is working on A.I. to spot hate speech."
- "Regulation is important for addressing the issue."
Additionally, the bot links to company blog posts, news releases, and relevant statistics from a report about "how the company enforces its standards."
These answers were formulated by the company's public relations department and echo what executives have publicly said in the past about such controversial issues. The chatbot has reportedly been in the works since this past spring.