The Limitations of Perception

What humans can perceive is not reality in full, but our own picture of reality, constructed by our brains from clues collected by our senses. Science allows us to understand far more, and provides instruments to extend our perception, allowing continual improvement of our picture of reality, but we know we do not yet have a complete picture.

A famous thought experiment by Plato is referred to as the allegory of the cave:

In the ‘allegory of the cave’, Socrates describes a group of people who have lived chained to the wall of a cave all their lives, facing a blank wall. The people watch shadows projected on the wall from objects passing in front of a fire behind them and give names to these shadows. The shadows are the prisoners’ reality but are not accurate representations of the real world. Three higher levels exist: the natural sciences; mathematics, geometry, and deductive logic; and the theory of forms.

From Wikipedia page on Allegory of the cave

This page looks at a simpler, more direct view of how ‘there is far more to the universe than our senses reveal’, the counterintuitive things we have already learnt and what is still to be learnt. Perhaps less profound than the point made by Plato, but still an interesting perspective.

  • Background: Our Senses
  • Perceptions and Limitations
    • images & movies
    • surfaces
    • colours
    • matter itself
  • Extended Perception
    • Telescopes And Our Place in the Universe
    • Radio Telescopes
    • Gravitational Waves
    • Neutrino Detection
    • Matter vs Antimatter
  • What we suspect, but cannot yet detect.
    • Dark Matter
    • Dark Energy?
  • Conclusion

Background: Our Senses

An Array of Senses: But a visual image of the world.

While as humans we have even more than the traditional 5 senses, those traditional senses play the most significant role in allowing our mind to build a picture of the world around us. I say a ‘picture’ of the world around us, as that is how most of us think: a visual image, rather than a sonic image, or an image of textures or smells, of what is around us. It is sight that plays the major role in allowing us to build an image of the world around us.

We learn from touch additional properties of the objects we see, but once beyond infancy, we no longer need to touch everything to recall those properties. While some without sight have been able to use sound to build a picture of the world around them, for most of us this is an unexplored potential skill, as it is much simpler to learn to build a picture of the world around us from the information received by our eyes. However, in either case, it is our brain that builds our picture of the world around us from the information from our senses. We actually see more with our brain than with our eyes.

The picture in our brain is built primarily from sight, and other senses such as sound and smell are mostly used either to refine the picture, or to tell us where to look. It is difficult to grasp how limited a role our vision plays in providing our mental image of what is around us. Firstly, our eyes only return information on a very limited field of view at any one time, approximately equivalent to our thumbnail at arm’s length. Even the determination of colour does not simply rely on the raw information from our eyes, but instead is interpreted by advanced computation in our brain that uses many clues to determine different actual colours even from the same information reaching our eyes. The colour we see becomes relative.

Perceptions and Limitations

Pictures & pixels

A photo can ‘record’ a visual image, but the way we represent photos relies on a limitation of perception. From a photo, we are able to reconstruct in our brain the same ‘world’ we would construct if we were at the time and place the photo was captured. The photo can reflect or emit photons in a sufficient imitation of the way photons would reach us from the original object that our brain can build a 3D map from a photographic image alone.

Yet, look closely enough at a photo or screen image, and all that is present is ‘dots’, in the form of film grain in older photographs, or ‘pixels’ in newer, digital photographs. Areas of closely spaced ‘dots’ are seen by the human eye/brain as continuous planes of the same shade. Convenient for enabling photography, and the production of screens with alternating pixels of different colours, but it does reveal that we are not actually seeing the full detail of what we are viewing. We could consider this a very convenient limitation, but it is still useful to be aware that what we perceive is not quite what is there to be seen.

Motion Pictures.

Another limitation of our perception has also proved quite convenient. At some point it was noticed that if you flick through slightly altered images, the result is perceived as pictures that move. The ‘frame rate’, or number of images per second, varies not just between pigeons and humans, but even for us humans, with the motion to be displayed. The first ‘movies’ used anywhere between 16 and 26 frames per second, and while the lower rates did still produce the illusion of motion, it could be ‘jerky’. By the 1930s, 24 frames per second was the standard for the cinema. Analogue television standards had a frame rate of 50 or 60 frames per second depending on the national alternating current frequency (60Hz in the US, Canada, and some of Latin America and Asia, and 50Hz for most of the world), but through ‘interlacing’ only half of the frame was originally updated for each image, resulting in 25 or 30 frames per second until digital television. Showing 24 fps film required repeating every 4th frame in 60Hz systems (‘3:2 pulldown’) so 4 frames became 5, while the programs just ran slightly fast in the rest of the world, since running at 25 vs 24 frames per second is a very small change.
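The frame arithmetic described above can be sketched in a few lines. This is a simplified, hypothetical whole-frame view: a real telecine repeats interlaced *fields* rather than whole frames.

```python
# Sketch of '3:2 pulldown': mapping film frames (24 fps) onto video frames
# (30 fps) by repeating every 4th film frame, so 4 frames become 5.
# (A real telecine repeats interlaced *fields*; this is the simplified
# whole-frame view described above.)

def pulldown(film_frames):
    """Repeat every 4th film frame so 4 input frames become 5 output frames."""
    out = []
    for i, frame in enumerate(film_frames):
        out.append(frame)
        if i % 4 == 3:  # every 4th frame is shown twice
            out.append(frame)
    return out

film = ["A", "B", "C", "D", "E", "F", "G", "H"]  # 8 film frames = 1/3 s at 24 fps
video = pulldown(film)
print(video)            # ['A', 'B', 'C', 'D', 'D', 'E', 'F', 'G', 'H', 'H']
print(len(video) / 30)  # 10 video frames also span 1/3 of a second at 30 fps
```

The same 1/3 of a second of content now fills the higher frame rate, which is why no speed change is needed in 60Hz countries.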

But digital computers and digital television have changed the world, and even how motion pictures are filmed. It turns out that humans do notice some flicker when things move quickly at 24 frames per second. The effect was even considered a desirable part of the ‘movie experience’, and when the film ‘The Hobbit’ introduced 48 fps (48 frames per second) back in 2012, it was controversial and many were outraged or otherwise negative. However, we now even have mobile phones with refresh rates, or ‘frames per second’, as high as 144 fps at the time of writing. So despite 24 fps being sufficient to convince our brains images are actually moving, this frame rate does still add ‘motion blur’, which higher frame rates can reduce. In the end, there is still a frame rate above which we cannot differentiate a series of still pictures from true motion.

Colour Vision.

Our perception of the frequency of photons is by way of colour. Unlike with sound, where we can hear very slight differences in frequency and detect two frequencies in harmony from the same point, the trade-off for having so many individual points of light detected is that for each point in our vision, we see only a single colour.

White light consists of the entire spectrum, and every colour on the spectrum can be generated by a single frequency.

Brown, and many other colours, are not on the spectrum, because those colours require more than one frequency. So when people say ‘all the colours of the rainbow’, the rainbow does not include all the possible colours we see.

Many very different combinations of light frequencies, which look different to many other animals or when measured using a spectroscope, can all look the same to us. For example, we will see a combination of blue light (460nm), green light (550nm), and red light (700nm) as being the same as white light, even though many frequencies, such as violet light (405nm) and yellow light (600nm), are missing.

Slightly increase the intensity of the red light, and we then see the result as orange; alternatively, add blue light, and in place of yellow, we see white, which is not even a colour on the spectrum. In fact, many colours are not on the spectrum, as the spectrum created by a prism has only a single frequency at each point. However, to our eyes, even most of the colours that are on the spectrum look the same as colours made from more than one frequency.
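A rough sketch of this additive mixing, treating each light source as an (R, G, B) intensity triple. This is a simplification of how a screen drives our three sensor types, not a model of the eye itself.

```python
# A sketch of additive colour mixing: each light source is an (R, G, B)
# intensity triple, 0-255 per channel, summed channel-by-channel.

def add_light(*sources):
    """Add light sources together, clamping each channel at full intensity."""
    return tuple(min(255, sum(s[i] for s in sources)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(RED, GREEN))           # (255, 255, 0)   -> seen as yellow
print(add_light(RED, GREEN, BLUE))     # (255, 255, 255) -> seen as white
print(add_light((255, 255, 0), BLUE))  # yellow + blue also gives white
```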

This is because our eyes have only three colour sensors: blue sensors, green (or green-red) sensors, and red (or red-green) sensors. The twist is that each sensor responds to a wide range of colours. The blue sensors actually sense all the way from violet through to green, the green sensors all light from blue through to red, and the red sensors almost all light we can see, although with varying sensitivity. We use the combination of signal strengths from all three sensors to detect colour.

Blue, Green-Red, and Red-Green Sensors

To the left is a reasonably accurate graph of how the sensors in our eyes respond to different frequencies. Note the significant overlap of the red and green: green-red and red-green receptors respond to almost all visible light, but with varying sensitivity. Our eyes do not see ‘red’ just because there is a response from the ‘red’ sensor, but because our brain determines ‘red’ from the combined response of the ‘red-green’ and ‘green-red’ sensors, and the absence of response from the ‘blue’ sensor. With almost all light there is some response from both the green and red sensors, and it is only the ratio of these responses that lets us determine colour. The strongest overall response is to orange and yellow frequencies, which cause a near-maximum response from both red (red-green) and green (green-red) sensors. The stronger the red (red-green) signal relative to the green (green-red), the more ‘red’ we see the colour, as opposed to seeing yellow or orange.

Wavelengths of ‘RGB’.

Computer screens, projectors, and other light sources can mimic almost any colour using just three frequencies, which can be considered pure ‘blue, green, and red’. These are not an exact match for the frequencies to which our three colour receptors respond.

While a ‘blue’ signal on its own will trigger only our ‘blue’ receptors, the ‘green’ and ‘red’ signals will each trigger two sensors, not just one, but in different ratios that allow our brain to determine the result as either green or red.

Now again consider the colour spectrum as generated by a prism, in contrast with the wavelengths of RGB above. How can we see the violet which is on the spectrum at 405nm, when even the blue LED does not produce any 405nm light? The answer is that the diagram of our optical receptors reveals that at 405nm, both blue and red receptors will respond. So by using both blue and red LEDs in the right ratio, the result will look violet to us, even though the light itself is not violet.

Further, if you add green LED light (560nm) and red LED light (670nm), we will see yellow matching what we would see for yellow light (605nm). This means that when viewing an image of a spectrum on your screen, the image can look just like a rainbow, even though many of the actual frequencies of light in a rainbow are not there. Spectrally, what is on the screen is not a good reproduction of a rainbow at all. Look closely at the image of the spectrum on your display, or through a magnifying glass, and there is no violet or yellow, only red, green and blue dots. Move back, and we see the yellow and violet. The image works quite well, but only for human eyes.

The colour image on screen looks correct at normal distances to human eyes, but would not look correct to the eyes of an eagle, which have at least 5 different colour detectors, or to the eyes of a bee, with their ultra-violet, blue and green sensors, or even to the eyes of a dog, with their two colour sensors.

Each cone of a bird or reptile contains a coloured oil droplet; these no longer exist in mammals. The droplets, which contain high concentrations of carotenoids, are placed so that light passes through them before reaching the visual pigment. They act as filters, removing some wavelengths and narrowing the absorption spectra of the pigments. This reduces the response overlap between pigments and increases the number of colours that a bird can discern.[23] Six types of cone oil droplets have been identified; five of these have carotenoid mixtures that absorb at different wavelengths and intensities, and the sixth type has no pigments.[28] The cone pigments with the lowest maximal absorption peak, including those that are UV-sensitive, possess the ‘clear’ or ‘transparent’ type of oil droplets with little spectral tuning effect.[29]

wikipedia on bird vision -light perception

This means when a pet watches a TV, phone or tablet screen, what is displayed will at best be equivalent to a recoloured image with many of the colours wrong, and at worst incomprehensible, as colours that appear different to us could become the same when seen by a pet if the brightness of the colours is similar. It would certainly be hopeful to send colour photos on Voyager, which is why almost all the images that were sent were black and white.

Printed and Painted Colours.

So far, this page has discussed colours generated by light sources, like televisions, phone and tablet screens or computer monitors. However, our world contains far more examples of coloured objects and images which are not light sources, but light reflectors. Walk through a supermarket and there is a barrage of printed images. Yet once again, printed images reflect very different light from that reflected by nature. Again, it is all because of the simplicity of those three colour detectors in our eyes.

Screen images are constructed from Red, Green and Blue, or ‘RGB’, but printed images are constructed from Cyan, Magenta, Yellow and black or ‘CMYK’. This is because on screen, lack of any colour is black (no colours) and colour is added, but on printed material, lack of any colour is white (all colours make white) and ink or tint or paint is added to remove colours.

The components of RGB each add one of Red Green and Blue. The components of CMY each remove one of Red Green and Blue:

  • Cyan (Blue and Green) removes only red
  • Magenta (Blue and Red) removes only green
  • Yellow (Green and Red) removes only blue
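This complement relationship can be sketched directly. A minimal, idealised illustration: ink amount equals the light removed from white, ignoring real-world inks, paper, and the K channel.

```python
# Sketch: each CMY primary is the complement of an RGB primary, so the
# amount of ink is simply the amount of light to remove from white.
# (Real printing adds K and per-device profiles; this is the idealised model.)

def rgb_to_cmy(rgb):
    """Convert a 0-255 RGB colour to 0-255 CMY ink amounts."""
    return tuple(255 - channel for channel in rgb)

print(rgb_to_cmy((255, 0, 0)))      # red   -> (0, 255, 255): magenta + yellow ink
print(rgb_to_cmy((255, 255, 255)))  # white -> (0, 0, 0): no ink at all
print(rgb_to_cmy((0, 0, 0)))        # black -> (255, 255, 255): all three inks
```

The last line shows why K exists: producing black would otherwise mean laying down all three inks at full strength, and any imperfection leaks colour.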

OK, so why black in CMYK, when there is no white in RGB? Recall that when we add RGB together, even though frequencies of light such as yellow and violet are missing, we still see the result as white. To our eyes, white can be close enough: as long as each of our 3 detectors detects sufficient light, the result is white. A sunny day, a cloudy day, LED lights, incandescent lights and candles all produce different mixes of colours, yet all can be seen as white. However, when removing light, you must remove everything to produce black. While little bits missing from white can still be seen as white, little bits of colour where there should be true black are not black. Enter black paint or ink, as a way of mopping up the light that would otherwise get through all three filters.

The result is more variable than with light created by adding colours, as some yellow paint may reflect genuine yellow, some may reflect just red and green, and some may reflect a mix of green, red and yellow, which would still be seen by our eyes as yellow. Paints can use any number of pigments to produce their colour, and some printers use more than just CMYK to produce colours. This means that while two images of a rainbow should look the same to us, there is huge possible variation in how those same images would look under a spectroscope. Printed violet, cyan, yellow, or orange need not contain any light of the corresponding frequencies, and can be the result of a mix of the ‘primary’ colours, which are ‘primary’ purely because of the cones of the human eye. These ‘primary’ colours are special to our eyes only!

Colour Trivia.

Cyan, Blue, Indigo, or Violet

There is much confusion as to why the ‘colours of the rainbow’ include indigo. I have a theory, but it is not backed by other resources, so do not take this as authoritative. My theory is that despite there being a very specific colour ‘blue’ in RGB, and a specific ‘cyan’ from mixing the G(reen) and B(lue) of RGB, in wider use both these colours would often be called ‘blue’. Look at the ‘rainbow’ from Wikipedia as shown to the left, with colours labelled: “Red, Orange, Yellow, Green, Blue and Violet”. The page omits ‘Indigo’, but which colour would you call blue? The colour adjacent to green, or the colour after that: the colour cyan, or the colour blue?

Now see the colour patch I created as a pure RGB representation of green, cyan, blue and then violet. I can imagine that in an age before RGB colour schemes, the cyan, which is close to ‘sky blue’, at least on my screen, would be called ‘blue’. Then why not call the next colour ‘indigo’ to differentiate it from the previous ‘blue’? My theory is that in the original naming, ‘blue’ was what we now call ‘cyan’, and ‘indigo’ what we now call ‘blue’.

White Balance: Colour is Relative.

Previously it was mentioned that a variety of mixes of colours can all be seen as ‘white’. Light bulbs can have a colour temperature, with higher temperatures for lights with a greater weighting on the blue end of the spectrum. These ‘different whites’ should affect the colour of things that reflect light, as evidenced when taking photos with a camera using the wrong ‘white balance’ setting, however our sight (the combination of eyes and brain) compensates so that things usually appear the same colour to us even under different conditions.

There can be some strange effects as our brains attempt to adjust for different lighting. Try blocking the central rectangle on the image to the right, using a pen or some other object held horizontally across the middle of the image, to the see that both ’tiles’ are the same colour. Perception does not always work as expected! This is because our brain does not just work with the raw data from our eyes.

The Blue Colour Detail Problem.

The area of the eye which reads with greatest acuity is the fovea. Reading and other key tasks require the use of the fovea. The greater density of light detectors at the fovea comes with one limitation: the ability to see blue in detail is reduced. Our brain can extrapolate blue content from the surrounding area, which hides this limitation, except when the detail we are trying to see requires the blue sensors. The blue sensors are required to see detail where the only colour change is in blue. Blue text on a white background is a change from white (red, green and blue) to blue, which means a complete change in red and green, but no change in blue. So blue text on a white background is perfectly readable, as the only colour that does not change is the one we do not see in detail anyway. Blue on a white background is basically as readable by the fovea as black on a white background. However, for blue on a black background, the reverse is true, and only the blue changes. This means the blue of RGB (see the colour patch above) should not be the colour of text on a black background, nor the background colour for black text. The same applies to any pair of colours where the only change is in blue, such as yellow and white, or magenta and red.

Matter Itself.

What we know vs what we perceive.

We now know that a truly continuous surface in nature is as much a limitation of perception as a truly continuous ‘surface’ is on our screens. What appears to us as a continuous area of, for example, the colour yellow, is not continuous at all, but dots (or pixels) of red and green. Nature has its own ‘dots’: atoms and molecules. The perception of a continuous plane is again not revealing the full story. Very much like the allegory of the cave, the world we can perceive does not reveal what causes what we perceive.

Matter And Touch

We detect matter though both touch and sight. Touch allows sensing the interaction of atoms and molecules with each other. Through touch we can detect friction and other properties, and these properties are those useful to us in ‘the cave’, but insufficient information to move beyond the cave.

Matter and Sight.

Light is made of photons. Being smaller than atoms and molecules, light can reveal far more about the nature of matter. Visible light, the frequencies our eyes detect, can pass through glass, air, water and other materials like Perspex and clear plastics. Each of these materials could be regarded as ‘translucent’, as almost nothing allows light through perfectly. We can regard glass of less than 10mm as effectively transparent, and air as effectively transparent for hundreds of metres. When air is as thick as the Earth’s atmosphere, there is significant scattering of blue light (Rayleigh scattering), which makes the sky blue during the day, but still allows a reasonable, although imperfect, view through to the stars of an evening.

However, compared to radio waves, which can travel through walls virtually unimpeded, visible light is blocked by almost everything. We can only see a very small portion of the overall spectrum, and that portion travels through almost nothing, creating the impression that matter is more substantial than it really is. With sound, we can hear from around 20 Hz through to over 10,000 Hz, which means the highest frequency we can hear is over 500 times the lowest. With sight, the highest frequency we can see is only around twice the lowest. If we could see more frequencies, much more about the nature of matter would be clear to us. Through technology, we can see far more frequencies, and have learnt much more, but it is difficult to break the shackles of what our own senses reveal and ‘grok’ that matter is so empty.

Extended Perception

Telescopes And Our Place in the Universe.

As far back as Pythagoras, before 500BCE, there were already people who believed the Earth to be a sphere. Almost 300 years later, Eratosthenes calculated the circumference of the Earth with reasonable accuracy, and at a similar time Aristarchus hypothesised that the Earth and other planets orbited the Sun, but it was not until Galileo turned the newly invented telescope to the sky, more than 1,800 years later, that we could make observations to confirm such theories.

Theories of the universe tend to lead our ability to physically perceive real evidence.

As early as the eighteenth century, the philosopher Immanuel Kant (1724–1804) suggested that some of the nebulae might be distant systems of stars (other Milky Ways), but the evidence to support this suggestion was beyond the capabilities of the telescopes of that time.

It was in 1887 that the first image of Andromeda, the closest galaxy other than our own, was captured by Isaac Roberts, but it was not until 1923, when Edwin Hubble measured the distance to Andromeda, that it was established that these ‘nebulae’ (clouds), first catalogued by Charles Messier back in the 18th century, were, at least in most cases, galaxies.

Spectroscopy, Hubble and the Expanding Universe.

Sometimes it is the combination of the old and the new that makes the breakthrough. Spectroscopes have been around since Isaac Newton, although they have improved over time. It was the combination of measurements of the ‘red shift’ of stars using telescopes and spectroscopy by Vesto Slipher, together with the measurements of distance from the observed brightness of Cepheid variable stars by Edwin Hubble in 1929, that revealed that the universe is expanding.
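The relationship Hubble found can be illustrated with a short sketch. The value of H0 used here is an assumption (roughly the modern accepted figure; Hubble's own 1929 estimate was several times higher):

```python
# Sketch of Hubble's law, v = H0 * d: the further away a galaxy is,
# the faster it recedes. H0 here is an assumed ~70 km/s/Mpc.

H0 = 70.0        # km/s per megaparsec (assumed)
C = 299_792.458  # speed of light in km/s

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance."""
    return H0 * distance_mpc

def redshift(distance_mpc):
    """Low-velocity approximation: z is roughly v / c."""
    return recession_velocity(distance_mpc) / C

print(recession_velocity(100))  # 7000.0 km/s for a galaxy 100 Mpc away
print(round(redshift(100), 4))  # a red shift z of roughly 0.0233
```

It is that measured red shift, combined with the Cepheid distances, that revealed the proportionality.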

Radio Telescopes

First introduced in 1939, following the discovery in 1932 by Karl Guthe Jansky that astronomical objects generate radio signals, radio telescopes have since revolutionised our ability to learn about the universe, as radio waves penetrate matter better than light waves and as a consequence are less affected by having travelled enormous distances through the universe.


Quarks are widely recognized today as being among the elementary particles of which matter is composed. The key evidence for their existence came from a series of inelastic electron-nucleon scattering experiments conducted between 1967 and 1973 at the Stanford Linear Accelerator Center. Other theoretical and experimental advances of the 1970s confirmed this discovery, leading to the present standard model of elementary particle physics.

The Discovery of Quarks
Fundamental Particles: As in Theory of (almost) everything.

Without colliders, we would not understand quarks or the other fundamental particles the entire universe is built from. There is a joke that ‘you should not trust atoms, because they make up everything!’. While it turns out not quite true that atoms make up everything, atoms themselves are made up almost entirely (by mass at least) of quarks. Protons are two ‘up’ quarks and a ‘down’ quark; neutrons are two ‘down’ and one ‘up’. Without colliders, from the Large Hadron Collider to many smaller colliders, we would have little beyond guesses as to how the fundamental particles interact to create our universe.
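The charges work out because up quarks carry +2/3 of the elementary charge and down quarks −1/3. A quick check of the arithmetic:

```python
from fractions import Fraction

# Quark charge arithmetic: up quarks carry +2/3 of the elementary charge,
# down quarks -1/3. Fractions keep the arithmetic exact.
UP, DOWN = Fraction(2, 3), Fraction(-1, 3)

proton = 2 * UP + DOWN   # two 'up' plus one 'down'
neutron = UP + 2 * DOWN  # one 'up' plus two 'down'

print(proton)   # 1 -> the proton's +1 charge
print(neutron)  # 0 -> the neutron is neutral
```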

The famous Large Hadron Collider collides … hadrons. Hadrons are particles made of two or more quarks. Since we could not confirm there even were quarks to form hadrons without experiments from colliders, it follows that the Large Hadron Collider was far from the first collider. The cyclotron type of particle accelerator was invented in 1929 at Berkeley, but true colliders, which not only accelerated beams of particles but also collided those beams, were not created until the 1960s.

Matter and Antimatter

The theory that there must be an anti-matter ‘mirror twin’ for each particle was first proposed by Paul Dirac in 1928 and won him the Nobel prize in 1933. The theory, still held as valid today, is that:

In particle physics, every type of particle is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the antielectron (which is often referred to as positron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.

Wikipedia: Antiparticle

Further, it is thought our entire universe is the result of events that generated matter and antimatter and what we have now is the result of an imbalance when the matter and antimatter cancelled out. The mystery to physics is not the creation of matter and antimatter out of nothing, but why there was matter left over when the matter and antimatter cancelled out:

The Big Bang should have created equal amounts of matter and antimatter in the early universe. But today, everything we see from the smallest life forms on Earth to the largest stellar objects is made almost entirely of matter. Comparatively, there is not much antimatter to be found. Something must have happened to tip the balance. One of the greatest challenges in physics is to figure out what happened to the antimatter, or why we see an asymmetry between matter and antimatter.

CERN: The matter-antimatter asymmetry problem

Quantum physics also predicts that matter and antimatter particles are spontaneously created in pairs in empty space, only for the pairs to then annihilate each other, resulting in a return to empty space. This was even invoked by Stephen Hawking when explaining Hawking Radiation:

Hawking’s insight was based on a phenomenon of quantum physics known as virtual particles, and their behaviour near the event horizon. Even in empty space, subatomic “virtual” particles and antiparticles come briefly into existence, then mutually annihilate and vanish again.

Hawking Radiation: Overview

The existence of anti-particles is even key to the Sun providing us with energy. The Sun fuses hydrogen into helium. At the temperatures of the Sun, electrons do not remain in orbit around a specific nucleus, and the result is a plasma of protons and electrons. Note that neutrons, an essential ingredient of helium, are absent. The neutrons needed to create helium are the result of high-energy proton-proton collisions which ‘flip’ an up quark into a down quark, turning one of the protons into a neutron and producing a positron. The positron is the anti-matter partner of an electron, and since anti-matter and matter annihilate each other, the positron only exists until it finds an electron. This means two protons have combined to form a nucleus of one proton and one neutron in this first step of fusion. The positive charge from the proton that became a neutron escapes by way of the positron, or ‘anti-electron’, which then annihilates with an electron, and thus maintains the balance of protons and electrons. For each proton that becomes a neutron, there must be one less electron.
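The first step of the proton-proton chain described above can be written as:

```latex
% Two protons fuse: one proton becomes a neutron (an up quark flips to a
% down quark), yielding deuterium, a positron and a neutrino.
p + p \rightarrow {}^{2}\mathrm{H} + e^{+} + \nu_{e}

% The positron then annihilates with an electron, preserving the
% proton-electron balance:
e^{+} + e^{-} \rightarrow 2\gamma
```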

But how can we ‘observe‘ antimatter?

PET (Positron Emission Tomography) is a nuclear medicine procedure which takes advantage of the unique signal produced by the annihilation of a positron and an electron yielding two photons of 511 keV traveling in nearly opposite directions. If the two photons are detected in coincidence, the origin of the event is known to lie within a well-defined cylinder between the detectors.

A Brief History of Positron Emission Tomography

The cyclotron, invented in 1929, was able to generate positrons from particle collisions, and these could be detected by their signature gamma radiation almost as soon as it was realised there would be positrons. The generation of positrons plays a key role in radioactive decay, and as a result PET detection of positrons has become a key part of medicine, enabling the tracing of the path of ‘markers’ through the human body.

However, discovery continues. It was only very recently that we could detect positrons in solar flares from the sun.

Neutrinos and Neutrino Detection

Three forms of neutrino appear on the ‘standard model’ of particle physics. While radio waves can pass through far more than visible light photons can, neutrinos take passing through matter to a whole new level.

The particle called the neutrino was conceived in 1930 by the Austrian-Swiss theoretical physicist Wolfgang Pauli (1900–1958) as a possible solution to two vexing problems confronting a widely accepted model of the structure of the atomic nucleus

Discovery of Neutrino

Neutrinos are tiny. Compared to protons and neutrons (hadrons), electrons are tiny, being around 1,800 times lighter. But neutrinos are tiny even compared to electrons. Their exact mass is still unknown, but measured upper bounds make them at least hundreds of thousands of times lighter than an electron, and less than around a billionth of the mass of a proton or neutron.
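To put rough numbers on this. The neutrino mass used is an illustrative assumption, since experiments so far only give upper bounds of around 1 eV:

```python
# Hedged arithmetic on particle masses. The proton and electron masses are
# well measured; the neutrino mass is an illustrative assumption of 0.1 eV.

PROTON_MEV = 938.272
ELECTRON_MEV = 0.511
NEUTRINO_MEV = 0.1e-6  # 0.1 eV, assumed for illustration only

print(round(PROTON_MEV / ELECTRON_MEV))    # 1836: the electron vs the proton
print(round(ELECTRON_MEV / NEUTRINO_MEV))  # millions: the neutrino vs the electron
```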

Despite their incredibly low mass, the lowest mass of anything besides a photon, neutrinos still seem to make up as much as 1/12th of the (non-‘dark’) matter in the universe. This requires a lot of neutrinos.

The shining sun sends 65 billion neutrinos per second per square centimetre to Earth. Neutrinos are the second most abundant particle in the universe. If we were to take a snapshot, we’d see that every cubic centimetre has approximately 1,000 photons and 300 neutrinos.

Fermilab: Nine weird facts about neutrinos

Neutrinos are almost impossible to detect. Despite all those neutrinos, on average only one neutrino from the Sun will interact with a person’s body during their lifetime. Neutrinos just pass through matter, and almost never interact. On the surface of the Earth, it would be practically impossible to detect that rare interaction with a neutrino among all the noise from photons and other particles, but deep underground, where all the photons have already been blocked, the neutrinos keep passing through. Neutrino detectors are placed as far as 1km underground and are huge, but we can manage to detect them. Of all the artificial sensors humans have built, a way of ‘seeing’ neutrinos has been one of the biggest challenges. Despite this, we have found ever-improving ways to detect neutrinos since 1956.

Gravitational Waves

Another addition to our arsenal of man-made sensors is the gravitational wave detector. See Gravitational Wave Detectors: How They Work. Operational for the first time in 2016, these are the most recent addition to our sensors.

What we cannot yet ‘sense’ or even detect

Dark Matter

Computations of how much ‘stuff’ must exist, based on observations of gravity, show that we can see only around 20% of all matter. All the particles on the standard model constitute only around 20% of matter; the rest, around 80%, we call ‘dark matter’. Dark because we can’t ‘see’ or detect it with any sensor we were born with or have created.

Perhaps there is just something a bit like a neutrino, in that it rarely interacts with other matter, only rarer still. We can only just barely detect neutrinos, so a type of particle that interacted even less often could explain everything. There are many theories that dark matter could indeed be some new kind of neutrino, or that at least some dark matter could be. If reading this some time after 2020, a new search for ‘neutrino dark matter’ may reveal more.

Note that there would need to be a huge number of this new type of particle. The 1/5 of matter we do know about comes from around 17 fundamental particles. Why not four times as many particles for the four times as much mass we cannot detect? Just because we know of no interaction beyond gravity between us and ‘dark matter’ does not prevent there being different types of dark-matter particles that interact with each other. Potentially an entire universe existing right where we are, without us even being aware!

Dark Energy?

Dark energy takes the concept of ‘unknown = dark’ to a whole new level. In relative terms, we know a lot about dark matter compared to dark energy. To some extent, both dark matter and dark energy started out like negative numbers: if you allow them in the equations, the maths works out. That does not mean they are not real; anti-matter, for example, started out the same way.

While with dark matter the theory has also led to predictions that have proved to make sense, with dark energy we are really still at square one. The universe is not only expanding, the rate of expansion is increasing. The maths says gravity should be a force pulling everything together, and if the universe is expanding ever faster rather than slowing, ‘dark energy’ is one way of making the maths work. The best way found so far.

To make the maths work, you need a lot of dark energy: more than twice as much dark energy as there is of everything else combined. Once dark matter is also considered, the result is that the universe as we know it is only around 5% of reality.
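The arithmetic behind that 5% figure can be checked directly, using the round numbers standard cosmology usually quotes (assumed here: roughly 68% dark energy, 27% dark matter, 5% ordinary matter).

```python
# Rough cosmic energy budget (assumed round figures from standard
# cosmology: ~68% dark energy, ~27% dark matter, ~5% ordinary matter).
dark_energy, dark_matter, ordinary = 0.68, 0.27, 0.05

assert abs(dark_energy + dark_matter + ordinary - 1.0) < 1e-9

everything_else = dark_matter + ordinary
print(f"dark energy is ~{dark_energy / everything_else:.1f}x everything else")
print(f"ordinary matter is ~{ordinary:.0%} of the total")
```

Dark energy comes out at a little over twice everything else combined, and ordinary matter at around 5%, matching the claims above.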

Dark energy is the leading explanation, but perhaps there is another explanation for the accelerating expansion, such as ‘gravity does not actually work as we expect’ (e.g. Is Dark Energy Really “Repulsive Gravity”? and other theories). But whatever the explanation, what is certain is that there is something very fundamental to how the universe works that we currently do not understand.


Conclusion

The world comprises far more than our senses perceive. I say ‘the world’ because it is not just objects at a great distance from us that we cannot detect, but at least around 80% of what is right here on Earth. There is no reason to believe dark energy is not all around us too, meaning we live in a world where we can sense no more than 5% of what is happening. We are already like prisoners in a cave, seeing only the shadows on the wall of the world in which we live.

Then consider the clues our senses provide to that which we do ‘know’. Matter we perceive as ‘solid’ is in fact mostly empty space; there is no such thing as a mathematical plane in nature, as everything is made of ‘dots’; and even the images we create of the world rely on our limited senses to such an extent that they would not reflect reality even to the slightly enhanced senses of a bird.

The universe as we know it is a construct of our ability to perceive.

Material to be integrated.

Our world is a construction of our own brain.

I see a picture on the television screen with a continuous area of yellow. Looking closely tells me this area is made of ‘dots’ or pixels, but my brain does not see pixels, it sees a continuous area. In fact the pixels are not even yellow, but red and green. Yet my brain constructs an image of a continuous area of yellow.
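That red-plus-green-equals-yellow trick is just additive colour mixing, and can be sketched in a few lines. This is a minimal illustration with plain RGB tuples; no display hardware or library is assumed.

```python
# Additive colour mixing: a screen shows 'yellow' by emitting red and
# green light together - the yellow exists only in the viewer's brain.
def mix(a, b):
    """Add two RGB light contributions, clamping each channel to 255."""
    return tuple(min(x + y, 255) for x, y in zip(a, b))

RED = (255, 0, 0)
GREEN = (0, 255, 0)
print(mix(RED, GREEN))  # (255, 255, 0) - the RGB code for yellow
```

The screen never emits ‘yellow’ light at all; the sensation of yellow is constructed when red and green signals arrive at the eye together.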

There are many other examples where we know what our brain constructs does not match reality. Click through to YouTube if you wish to see more.

In fact the whole world as constructed in our brain is very different from reality. Just as the picture on the television is made of dots and not continuous areas, the whole universe is made of atoms and molecules which are mostly empty, and there are no continuous areas in reality either.

So how does this affect love?

The point is that the entire world we perceive does not necessarily match reality. The universe we perceive is a construction of our own brain from information created by our senses and memories. There is no reason to assume our perceptions of other people are perfect matches to those other people either. Can one person’s brain possibly hold a complete picture of another person’s thoughts? My suggestion is that everyone we think we know effectively exists as an avatar of the actual person within our brain, and this avatar is continually updated as more information is perceived. But that avatar can never be an exact match for the real person, which is why people will always have the possibility of surprising us.

So who do we fall in love with? The image of another person we hold in our own mind, or the actual other person, who will in some details not match the image we hold in our brain?
