Meta is considering automated checks on the chatbots’ outputs to ensure accuracy and compliance with platform rules.
Meta Platforms, the parent company of Facebook and Instagram, is set to unveil a new range of artificial intelligence (AI) chatbots with distinct personalities as early as September.
According to reports from the Financial Times, citing people familiar with the matter, the tech giant has been developing prototypes for chatbots capable of engaging in human-like discussions with its users.
These AI-powered tools, known internally as “personas”, will feature various characters, including one modeled after former United States president Abraham Lincoln and another designed to offer travel advice in the style of a laid-back surfer. When launched, the chatbots will offer users a new search function with personalized recommendations.
Meta’s New Endeavor Raises Questions about Data Protection
The move comes as the company strives to address challenges related to user retention. In a recent earnings report for Q2 2023, Meta CEO Mark Zuckerberg acknowledged the company’s focus on its latest product, Threads, and its efforts to win back users who drifted away after the app’s launch.
Recall that Coinspeaker reported last month that the new social media platform achieved an impressive milestone of over 100 million users within a few days of its debut, beating the record set by OpenAI’s ChatGPT. Although the Instagram-inspired app has since experienced a decline in user engagement, the Meta CEO said on the recent earnings call that users are returning to the platform designed to rival Twitter.
With the latest AI products on the way, the company aims to attract more users to Threads to bolster its adoption and strengthen its foothold in the social media sector.
However, it is worth noting that Meta’s new endeavor is not without challenges. The implementation of AI chatbots raises concerns about data privacy and security. With each user interaction, the chatbots have the potential to collect vast amounts of data. While this data can be valuable for improving content and advertising targeting, it also raises questions about how Meta intends to handle and safeguard user information.
According to the Financial Times report, the company is considering automated checks on the chatbots’ outputs to ensure accuracy and compliance with platform rules. By implementing such measures, Meta aims to reduce the chances of chatbots spreading misinformation, hate speech, or other harmful content.
“According to a Meta insider, the company will probably build on technology that will screen users’ questions to ensure they are appropriate. The company may also automate checks on the output from its chatbots to ensure what it says is accurate and avoid hate or rule-breaking speech, for example,” reads the FT report.
Other Tech Companies apart from Meta Exploring AI Opportunities
Apart from Meta, other tech giants, including Apple, are actively exploring AI offerings. Apple is reportedly developing a proprietary framework, ‘Ajax’, to create large language models, and is also testing its own AI chatbot, internally referred to by engineers as ‘Apple GPT’.
Furthermore, Facebook and Instagram’s parent company is not the only tech firm venturing into AI-powered chatbots with unique personalities. Character.ai, backed by Andreessen Horowitz and valued at $1 billion, has successfully developed chatbots that emulate the conversational styles of prominent figures such as Tesla CEO Elon Musk and Nintendo character Mario.
Another tech firm, Snap, has developed a chatbot feature dubbed “My AI”, which has garnered substantial engagement, attracting interactions from 150 million users.