Google Research have a fascinating post about what happens when computers stare at clouds and tell you what they see.
Ok, no they don't. They have a post about what happens when a neural network is trained to look for certain things, given images that don't contain those things, and then "asked" if it can see anything that looks a bit like those things.
Neural networks have* multiple layers - the lower layers tend to spot basic patterns, and the later layers then build up more complex shapes from those patterns. Spotting an edge is remarkably easy for a neural network, as is a corner. And pretty much everything can be built from edges and corners, particularly if you allow them to be curved.
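That edge-spotting is, at heart, a single convolution: slide a small grid of weights over the image and see where it lights up. Here's a minimal sketch in plain NumPy - the kernel is a classic hand-built vertical-edge filter, not weights from any real trained network:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image, summing products at each
    position - the basic operation a convolutional layer performs."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge kernel: responds strongly where brightness changes
# from left to right, and not at all in flat regions.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0]])

# A toy image: dark on the left half, bright on the right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

response = convolve2d(image, vertical_edge)
print(response)  # peaks in the columns straddling the dark/bright boundary
```

A real network's first layer is just a stack of lots of these little filters, with learned rather than hand-picked weights; the layers above combine their outputs into corners, textures, and eventually dog faces.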
And when you show a picture of reality to a neural network, and ask it what it sees, it divides up the world into these low-level features, and shows you something like this:
Golly, that looks familiar**.
Take a network trained to spot dog faces, point it at a picture completely lacking in dog faces, and it will see them nearly everywhere:
And that seems familiar too. Under the influence of hallucinogens, the brain will find an edge, or a corner, or a fragment that looks vaguely like the tip of an elephant's trunk, and then will extrapolate outwards to say "Ok, we both know that the sky isn't actually filled with fractal meditating elephants, but still, isn't it impressive-looking?"
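Mechanically, that extrapolate-from-a-fragment loop is what the Google trick does: run the image through the network, then nudge the image so whatever the network faintly detected stands out more, and repeat. A toy one-dimensional sketch of the same feedback loop - hand-rolled gradient ascent on a single made-up "edge detector", nothing like the real multi-layer version:

```python
import numpy as np

# A 1-D "edge detector" filter: responds wherever the signal jumps.
edge_filter = np.array([-1.0, 1.0])

def responses(signal):
    # How "edge-like" each position looks to the detector.
    return np.array([np.dot(signal[i:i + 2], edge_filter)
                     for i in range(len(signal) - 1)])

def amplify(signal, steps=100, lr=0.05):
    """Nudge the signal, step by step, to make the detector respond
    more strongly, renormalising so values stay bounded. The loop
    converges on whatever pattern the detector likes best."""
    signal = signal.copy()
    for _ in range(steps):
        r = responses(signal)
        # Gradient of sum(r**2) w.r.t. the signal: response r[i]
        # touches signal[i] with weight -1 and signal[i+1] with +1.
        grad = np.zeros_like(signal)
        grad[:-1] -= 2 * r
        grad[1:] += 2 * r
        signal += lr * grad
        signal /= np.linalg.norm(signal)
    return signal

rng = np.random.default_rng(0)
faint = rng.normal(0.0, 0.01, size=8)  # almost-featureless noise
loud = amplify(faint)

# The barely-there bumps in the noise are now pronounced edges.
print(np.abs(responses(faint)).max(), np.abs(responses(loud)).max())
```

Start from near-nothing and the loop still produces strong "edges", because any stray fluctuation the detector half-likes gets reinforced - which is exactly the sky-full-of-elephants failure mode.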
The images from the article, and some others, are here - take a look, and you can start to get an intuition for how visual hallucinations work. And once you understand how the brain can decide that it literally sees things that aren't there, conspiracy theories also become very easy to understand as what happens when the part of the mind that spots patterns kicks into overdrive, and the part that checks they make sense takes the day off.
Of course, me saying that is _me_ spotting patterns, imposing meaning on them, checking they fit in with my general understanding of the world, and then passing it on. If either my pattern-spotting or sense-checking has gone wrong, then all of the above may be nonsense. That's why I keep you lot around, to tell me when I'm no longer making sense...

*Everything I say about Neural Networks is incredibly simplified. This is (a) because it's been about twenty years since I've done any reading about them and (b) because the last thing any of us wants here is to get bogged down in detail. Hopefully I'm not so simplified that I'm simply wrong - but if I am, rest assured that someone will point it out in the comments.
**Until 2005, magic mushrooms were legal in the UK. Oh, for those more sensible days.
Original post on Dreamwidth