4 Reasons Facebook's Artificial Intelligence Failed | TechFunnel | IT News


Facebook’s Artificial Intelligence

Facebook has brought the world closer together over the last 15 years with its application and website. Other entrants into the social media ring that took on Facebook (Myspace, Google+, etc.) have all come and gone, but Facebook still rules the roost. Since its IPO a few years ago, Facebook has only grown stronger and more resolute in its reach into modern society.

Facebook has tinkered with its algorithms and user experience over the last few years. Advertisements and page suggestions are all tied to the user’s browsing history and what he or she has “liked” on Facebook, subjecting the user to Facebook’s own version of SEO. The company has implemented many changes behind the scenes, and one of the most talked-about was Facebook’s artificial intelligence.

AI has been around for quite some time; it was first engineered into computer programs, as in the famous chess battles between Garry Kasparov and Deep Blue in the 1990s. The very first chess match between human and computer took place all the way back in 1956, between MANIAC, a computer developed at Los Alamos Scientific Laboratory, and a chess novice. MANIAC defeated the novice in 23 moves, unknowingly kicking off AI in the process.

We’ve all seen futuristic movies detailing a desolate wasteland run by our computer overlords. It’s a scary glimpse into a horrific future; we all remember HAL 9000, the AI computer that went rogue in Arthur C. Clarke’s essential 1968 masterpiece “2001: A Space Odyssey”. If Facebook’s AI chatbot program is any indication, we are a long way off from any future HALs.

#1. More Testing Needed

Why was Facebook’s artificial intelligence shut down? To answer this question, we look at what Facebook was trying to achieve with the chatbot project. According to its FAIR lab (Facebook Artificial Intelligence Research), Facebook was trying to develop “dialog agents” that could negotiate with each other using machine learning. Perhaps the company was looking to someday replace the human agents on the receiving end of customer chats? That’s speculative at best, though.

While training these chatbots, researchers found that the bots had started conversing with each other in their own language. Looking at the transcripts, it almost reads like baby talk in places, but then again, we all talked like that at one point in our lives. In 2017, Facebook decided to suspend the chatbots until further development could happen.

#2. Communication Breakdown Between AI & Facebook Infrastructure

In more recent times, Facebook’s artificial intelligence systems again had a failure of epic proportions. This happened last week, when its systems failed to detect the New Zealand mosque shooting video that was live-streamed as it happened. The video was eventually pulled, but by then nearly 20 minutes of very graphic footage of this senseless and horrific act had been streamed to the entire globe.

#3. More Time for The AI to Learn

Facebook has claimed that the AI has grown in leaps and bounds over the last few years, but it’s not perfect, and that imperfection is one of the reasons it has failed so far. Since AI is a learning system, everything the computer takes in is in flux and constantly changing. Not unlike humans in our earliest stages of development, it is constantly encountering new things and having new experiences, learning to talk and to write. Mistakes can and will happen at that stage of development, and Facebook’s artificial intelligence has been in that stage for the last few years.

#4. More Adjustments to The AI Recognition Bot Are Needed

One of the reasons that Facebook’s artificial intelligence has failed is the learning process of the AI itself. In the case of the New Zealand massacre video, the AI had trouble discerning the graphic real-life video from the similarly “graphic” video game shooters that get live-streamed for public consumption. With time, improved learning capabilities, and better technology, these gaffes should lessen. As stated earlier, AI has a long way to go before it gets things right.

Facebook will not be daunted by these setbacks, as it believes AI will eventually become a viable part of society, and perhaps one day it will. In the meantime, these failings and missteps will continue in the short term until research and development improve the learning processes and hardware of these AI systems.

Danni White
Danni White is the Director of Content Development at Bython Media, the parent company of TechFunnel.com, OnlineWhitepapers.com, BusinessWorldIT.com, List.Events, and TheDailyPlanIOT.com.