Artificial Intelligence and Stupidity: can robots be smart?

Codemotion
Dec 13, 2019

Are robots going to steal your job? The answer is probably yes, but it won't happen any time soon. AI bugs are a thing, so let's talk about Artificial Stupidity.

Why do we fear Robots?

Everyone who has ever heard of Artificial Intelligence knows that sooner or later machines will be smarter than us. But are we close to this kind of intelligence? Well, not at all, and Alex Fernàndez, developer and coordinator of MadridJS and NodeJsMadrid, clearly showed why in his talk at Codemotion Rome 2019.

At the very beginning of the AI era, we were very optimistic about it. Everyone thought that Artificial Intelligence would be an awesome servant leading us to a new Sparta where citizens dedicate themselves to literature and science while slaves do all the work. In 1966 Simon Ramo, American engineer and father of the ICBM, wrote:

“Our goal is to use technology to extend our intellectual efforts and our spiritual aspiration.”

The creepy feeling of becoming inferior started haunting us in 1997, when IBM's Deep Blue defeated chess world champion Garry Kasparov at his own game. We realised that an algorithm could be smarter than us.

But were we really defeated by machines, or rather by ourselves? After all, Deep Blue was programmed by Murray Campbell and his colleagues at IBM, definitely human beings.

So why do we fear Artificial Intelligence this much? Antonio Casilli, sociologist and professor in Digital Humanities, once said:

“The robot myth has been used for centuries to discipline the workforce. Workers might be replaced first by a steam machine, next by an industrial machine and now by an intelligent machine.”

Are we overestimating Robots?

In the last few years, manufacturers showed us really awesome machines, capable of all sorts of things. We saw walking and jumping robots, industrial robots and even battlebots. They look fast, precise and efficient. But are they?

Alex showed us the Tomra Sentinel II, a tomato sorting machine. Its purpose is to separate red tomatoes from green ones, yet some red tomatoes end up rejected and some green ones pass the test. Still impressive considering how fast it works, but far from perfect.

Another case is mobile robots. The Mini Cheetah from MIT is a great example of how robots should walk, recover from falling and so on. The problem is that the vast majority of robots have remarkable movement difficulties, such as simply falling over. Even the acclaimed Opportunity rover could only probe or move, not both at the same time.

Robots clearly can’t compete with us for now, especially on the movement side. But what about AI? Let’s look at self-driving cars: obviously cars have great mobility on their terrain of choice, asphalt.

After all, as Alex says, the comparison is between:

“State-of-the-art sensors VS primates in a metal box.”

So, if the “state-of-the-art sensors” algorithms are correct, what could go wrong?

A lot, apparently. There have obviously been huge improvements in the last few years, but the number of crashes per 1,000 miles for a self-driving vehicle is still higher than for a human driver, even when drunk drivers are included.

Google is not the only company working on driving AI. This technology, of course, appeals to many companies, so it's not hard to imagine a Tesla self-driving car or a BMW one. But what about a machine from a less expensive and less reliable manufacturer? After all, low-cost options exist in every kind of business and, usually, they are the most popular. Are we ready for hundreds of buggy metal boxes speeding through our streets? Probably not.

In 1962 the visionary science-fiction writer Arthur C. Clarke said:

“The automobile of the day-after-tomorrow will not be driven by the owner, but by itself. It may be one day a serious offence to drive an automobile on a public highway.”

Unfortunately, “the day-after-tomorrow” is yet to come.

What about Deep Learning?

Deep Learning is a branch of Machine Learning based on learning data representations instead of task-specific algorithms. A Deep Learning system is built from artificial neural networks, which are loosely inspired by the neurons of a human brain.
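
As a rough intuition, a single artificial neuron just computes a weighted sum of its inputs and squashes the result with an activation function. Here is a minimal sketch in Python; the weights and inputs are arbitrary illustrative values, not from any real model:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum + sigmoid activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # sigmoid maps to (0, 1)

# Deep Learning stacks many layers of such neurons and learns the
# weights from data, instead of hand-coding task-specific rules.
activation = neuron([0.5, 0.8], weights=[0.4, -0.6], bias=0.1)
```

The "learning" part consists of nudging those weights so the network's outputs match the training data, which is exactly what replaces the hand-written, task-specific algorithm.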

Chess is a very popular way to benchmark an AI, so let's go back to it for a minute. Stockfish, the engine that won the 2016 Top Chess Engine Championship and is widely used by chess professionals for training, has been annihilated by AlphaZero.

AlphaZero, an AI based on Deep Learning and produced by Google's DeepMind, won a 100-game match without a single loss after just nine hours of training. The chess world, as well as the Artificial Intelligence research community, was shocked by such an event.

How was this possible? AlphaZero was trained with more than 5,000 Tensor Processing Units and ran on more than 44 cores, while Stockfish ran on a regular PC.

And let's consider this: a complete neural network, just like the one AlphaZero uses, has roughly the power of a single human neuron. And we have billions of them.

Moving on to Face Recognition, it can lead to hilarious problems. Humans are really good observers, among the best in nature, but when it comes to recognising patterns such as scrambled images, face-recognition systems are remarkably efficient. Nonetheless, they are easily fooled: a single altered pixel can be enough to trick a neural network.
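
To see why one pixel can matter so much, consider a toy linear classifier — not a real face-recognition model, just an illustration with made-up weights. When one input dimension carries a disproportionately large weight, changing that single value flips the decision:

```python
def classify(pixels, weights, threshold=0.0):
    """Toy linear classifier: positive score -> 'match', else 'no match'."""
    score = sum(p * w for p, w in zip(pixels, weights))
    return "match" if score > threshold else "no match"

# Hypothetical weights; the third dimension dominates the decision.
weights = [0.1, 0.1, 5.0, 0.1]

original = [1.0, 1.0, 0.1, 1.0]   # score = 0.1 + 0.1 + 0.5 + 0.1 = 0.8
tampered = [1.0, 1.0, -0.2, 1.0]  # one "pixel" changed -> score = -0.7

print(classify(original, weights))  # match
print(classify(tampered, weights))  # no match
```

Real adversarial attacks on deep networks exploit the same kind of sensitivity, just spread across millions of learned weights instead of four hand-picked ones.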

Issues like these are why an AI specialised in Face Recognition gave the UK police a hard time. The system had a 92% false-positive rate, which quickly escalated to an astonishing 100%. In case you're wondering, yes, they were fined for this.
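
A 92% false-positive figure is less mysterious than it sounds: when the faces you are looking for are rare in a crowd, even a fairly accurate system produces mostly false alarms. A quick back-of-the-envelope calculation, with entirely made-up numbers:

```python
# Hypothetical crowd: 50 wanted faces among 100,000 people scanned.
wanted, innocent = 50, 99_950
sensitivity = 0.90           # chance of flagging a genuinely wanted face
false_alarm_rate = 0.005     # chance of flagging an innocent face

true_positives = wanted * sensitivity            # 45 correct alerts
false_positives = innocent * false_alarm_rate    # ~500 wrong alerts

share_false = false_positives / (true_positives + false_positives)
print(f"{share_false:.0%} of all alerts are false")  # ~92%
```

This base-rate effect means a headline false-positive percentage says as much about how rare the targets are as about how bad the system is.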

So, should we use AI?

After all these considerations you might be skeptical about using AI but, frankly, you shouldn't be. It's only a tool that we are still learning to use properly. Besides, the problems we discussed are mostly optimisation problems, and they will surely be corrected.

But we should remember the importance of ethics when building Artificial Intelligences. An AI could easily be misused by a dishonest politician. Today it could be a "vote-maximiser" algorithm, but tomorrow, who knows?

Furthermore, algorithms tend to perpetuate their creators' beliefs, and this leads to serious fairness problems.

So you shouldn't worry about AI stealing your job, at least for now. Let's take it for what it is: an awesome tool, but nothing more.

Just don’t give it a kill switch.
