There are a few questions I often hear when engaging in conversations about Artificial Intelligence (AI). Is AI worth pursuing? Could AI get out of hand and become a threat to humans? Could androids like Erica become maliciously threatening machines?

It seems that a number of top technologists and scientists are worried about this new phenomenon. Elon Musk, Stephen Hawking, and Bill Gates are not against the idea of AI, but they would rather see it developed with due caution, particularly when it comes to AI-driven weapons and control systems. This type of AI could be used to attack strategic targets without any interaction or direction from humans. These are the AI machines currently in the limelight.

There are major challenges facing AI development. It is plausible that AI may one day become so advanced that machines and robots could misinterpret a problem or a mission and tackle it in a way that causes more harm than good. Machines with AI brains could eventually decide that they want power of their own. Yet the real potential threat posed by AI is that it will eventually be used in harmful ways by humans who are driven by greed and power.

Still, the potential of AI is enormous. AI is already used to reduce motor insurance claims, to control and manage disasters, to prevent fatal accidents, to determine which drugs will be most effective against a disease, and to pre-emptively identify our greatest wants and needs. For instance, Facebook's AI-enabled facial recognition system can recognise individuals in photos so that it can make more appropriate and intelligent recommendations. The system also adapts as it learns and is now capable of recognising an individual through other traits, such as how a person stands or the clothes that person wears.
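As a rough illustration of the matching step behind such a system (this is not Facebook's actual implementation), the open-source face_recognition Python library can compare a face found in a new photo against a known reference photo; the image file names below are hypothetical placeholders.

    # Illustrative sketch only: compares a face in a new photo against a known
    # reference photo using the open-source face_recognition library.
    # The image file names are hypothetical placeholders.
    import face_recognition

    # Encode the face in a photo of a person we already know.
    known_image = face_recognition.load_image_file("alice_reference.jpg")
    known_encodings = face_recognition.face_encodings(known_image)

    # Encode the face(s) detected in a new, unlabelled photo.
    unknown_image = face_recognition.load_image_file("new_photo.jpg")
    unknown_encodings = face_recognition.face_encodings(unknown_image)

    if known_encodings and unknown_encodings:
        # compare_faces returns one boolean per known encoding; the tolerance
        # trades false matches against missed matches (0.6 is the library default).
        matches = face_recognition.compare_faces(
            known_encodings, unknown_encodings[0], tolerance=0.6
        )
        print("Same person?", any(matches))
    else:
        print("No face detected in one of the images.")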