AI, COMPUTER VISION AND OUR TIME-TRAVELLING, LYING BRAINS

Maurillio Addario, Analytics & Software Engineering Director

Before you worry too much about the effect that a year of social isolation has had on me, I promise the title of this post is not as strange as it might seem.

Indeed, the past year has been an odd one – and I won’t get into that – but nothing is quite as fascinating and, at the same time, impossibly odd as the way our own brains work. My role at ANSA focuses on the development of specialist computer systems that aim to, amongst other things, learn how to “see” and identify features in data acquired from oil & gas wells. We are exploring the game-changing insights into cased hole data analytics that can be achieved through systems powered by Artificial Intelligence and Machine Learning.

But, when it comes to us as humans, you might well ask: How do “we” do it?

Our (crazy) Visual System

Surely our own visual system should provide a good template for how to evolve the ability to see, to identify patterns, to process visual information coming from our own two eye-shaped biological cameras and make sense of it all, right? Wrong.

It turns out that, in programming terms, our visual system is rather badly cobbled together, with a huge number of “hacks” to make sure things work as expected. If that takes a lot of approximation, making things up, straight-up lying and even time-travelling then, as far as the human visual system is concerned, that is just fine.

One of the weirdest and most interesting phenomena in the way we see is called Chronostasis. It comes about because our eyes move very, very quickly in coordinated shifts called saccades, and it turns out that most of our vision goes rather blurry during those quick shifts. Now, if this were a programming challenge, a software developer would probably simply ignore the visual input during those milliseconds – but we don’t see “blurry”, nor do we have missing intervals; what we see appears to be continuous in time. So how does our brain deal with this lack of “video input” every single time we move our eyes? It time-travels!

Don’t believe me? You can observe this effect yourself with the help of an analogue watch with a ticking seconds hand. Watch the seconds hand for a few moments and then look away (moving your eyes only, not your head). When you look back at the seconds hand, the first second will appear to take longer to pass! What happened there is that your brain just lied to you about how long a second “is”, in order to hide the fact that it couldn’t really see anything for up to 500 ms while you moved your eyes away and back.

Another way to demonstrate that our vision is not perfectly continuous in time is to simply look in the mirror. You won’t be able to see your own eyes moving when looking from one eye to the other, no matter how hard you try, as that gets masked by your brain. On the other hand, if instead of a mirror you use the selfie camera on your phone, you should be able to see the saccades, as the camera adds enough delay.

Apples and Apples

When discussing the implementation of AI systems to help us make sense of the data around us, it is often easy to make arguments based on how much faster an automated system can perform a task compared to its human counterpart. Let’s consider, for example, the identification of a particular pattern in a time series. It could be the Gamma Ray signature of a particular area of an oil & gas correlation survey, or the pattern associated with a premature ventricular contraction on an ECG trace. A specialist system can search tens of thousands of feet of Gamma Ray data sampled at 0.01 ft, or a whole year of ECG data, in minutes, something it would take a human specialist an inordinate amount of time to do.
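To make that kind of search concrete, here is a minimal sketch using simple normalised cross-correlation; the sample interval, signature length and threshold are illustrative assumptions, not a description of ANSA’s actual pipeline.

```python
import numpy as np

def find_signature(log, template, threshold=0.95):
    """Slide a reference signature along a 1-D log and return the start
    indices where the normalised cross-correlation exceeds `threshold`."""
    log = np.asarray(log, dtype=float)
    template = np.asarray(template, dtype=float)
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    scores = np.empty(len(log) - n + 1)
    for i in range(len(scores)):
        window = log[i:i + n]
        w = (window - window.mean()) / (window.std() + 1e-12)
        scores[i] = np.dot(w, t) / n          # correlation in [-1, 1]
    return np.flatnonzero(scores >= threshold)

# Illustrative example: 500 ft of Gamma Ray data sampled at 0.01 ft.
depth_step = 0.01                              # ft per sample (assumed)
gamma_ray = np.random.default_rng(0).normal(75, 15, 50_000)
signature = gamma_ray[20_000:20_200]           # a known 2-ft marker pattern
hits = find_signature(gamma_ray, signature)
print("candidate depths (ft):", hits * depth_step)
```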

What is often not given enough focus is how comparable the results really are when the same data is interpreted by different specialists. We now know that at a very low level what we see is not quite as it seems, that what we “see” and what we “think we see” are rarely the same thing. But that is just the tip of the eye-cberg (sorry… I just couldn’t resist). At a higher level, we humans are often flirting with all manner of biases in the way we interpret information, whatever the source. There are so many different types of bias that the word count of this post could be greatly increased just by naming a few: confirmation bias; apophenia; anchoring; self-serving; status quo; etc. All of these introduce an element of uncertainty, which is compounded when the same information is looked at by several different individuals, leading to multiple – and often conflicting – diagnoses.
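That disagreement can even be put to a number. As a minimal sketch (the interpreter labels below are invented purely for illustration), Cohen’s kappa measures how much two interpreters of the same intervals agree beyond what chance alone would produce:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two interpreters classifying the same ten log intervals (hypothetical labels).
interp_1 = ["leak", "ok", "ok", "leak", "ok", "ok", "leak", "ok", "ok", "ok"]
interp_2 = ["leak", "ok", "leak", "leak", "ok", "ok", "ok", "ok", "ok", "ok"]
print(f"Cohen's kappa: {cohens_kappa(interp_1, interp_2):.2f}")
```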

This is perhaps where the greatest value can be derived from the application of AI & Machine Learning to complex, data-driven answer systems: the ability to compare apples with apples.

At ANSA, we are taking cased hole data analysis to the next level – drawing on our 30-year oil & gas heritage of analysing millions of feet of well log data, and integrating that with cutting-edge AI/ML techniques to drastically improve reporting speed, consistency and accuracy.

Get In Touch

Can we help? Talk to us about your data analysis and interpretation requirements.
