I wrote a piece called The future of photos isn’t cameras, in which I argued that with lidar and other newly available consumer technologies, we will no longer capture photos and video as we do today. Instead, our devices will take a 3D scan of our surroundings, sample the colors, textures, and so on, and then render it all back to us on the fly later, whenever we want to look at our pictures.

This means that when revisiting a moment, you won’t just have it from a single angle or point of view; you will be able to re-experience it in an immersive way that will make the media of today look quaint.

If you are wearing AR glasses, you will be able to feel as if you were really there. You could turn your head, maybe walk around a bit, and the scene would readjust itself to give the illusion of you moving through that space at that time.

Today, a big part of why we take so many photos and videos is to share them with family and friends. In an AR future, when you share a photo with someone, they too will be able to immerse themselves in it the same way.

Let’s say I go to Niagara Falls. On the boat that sails right under the waterfall, I take a video: the water crashing down and bouncing off the deck, the overwhelming cacophony of sounds, the waterfall itself, the boat engine, the people around me laughing and screaming. There are different details and sights to discover at every turn of my head, or as I walk from one side of the boat to the other.

If all of that were recorded with multiple cameras, sensors, and microphones, I could later go back and re-live it. And again, not only from the perspective of how I was situated the first time, but with the option to explore it differently on each visit, to move more freely through the space.

Now I send you that file, which is relatively small because most of the information is coordinate data. You put on your glasses and headphones, and now you can be on that same boat in that same moment.
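To make the “mostly coordinate data” idea concrete, here is a minimal sketch of what one captured point sample in such a file might look like. Everything here is hypothetical (the field layout and names are made up for illustration, not from any real capture format): a position in space, a color, and a time offset, packed into a compact binary record.

```python
import struct

# Hypothetical layout for one point sample:
# 3 floats for position (x, y, z), 3 bytes for RGB color,
# and 1 float for a timestamp offset into the recording.
POINT_FORMAT = "<fff3Bf"  # little-endian, no padding: 12 + 3 + 4 = 19 bytes
POINT_SIZE = struct.calcsize(POINT_FORMAT)

def pack_point(x, y, z, r, g, b, t):
    """Serialize a single point sample into bytes."""
    return struct.pack(POINT_FORMAT, x, y, z, r, g, b, t)

def unpack_point(data):
    """Deserialize bytes back into a (x, y, z, r, g, b, t) tuple."""
    return struct.unpack(POINT_FORMAT, data)

# A million points at this layout is roughly 19 MB before compression --
# hefty, but on the order of a short video clip rather than a movie.
sample = pack_point(1.0, 2.5, -0.75, 200, 180, 160, 0.033)
assert unpack_point(sample)[:3] == (1.0, 2.5, -0.75)
```

A real format would add normals, mesh connectivity, audio, and aggressive compression, but the core point stands: the scene is stored as geometry plus samples, and the immersive view is rendered from it on demand.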

But here’s where it gets fun: we could sync up and do it at the same time. Physically apart, but “watching” together, overlaying our own live audio on the recording so we can talk to each other. We are both re-living the Niagara Falls boat ride simultaneously, able to discuss it as if we were really there together. (Another time we can talk about placing avatars in the scene, so that when I turn to my left, I see you, and when you turn to your right, you see me.)

The last piece to think about here is that there is no reason this has to be pre-recorded; it could be happening live. I am on the boat under the waterfall right now, and you, far away, are experiencing it with me in all its glory, in real time. But even then, you are not constrained to my point of view. I might be looking to the left on the boat, and you might choose to turn your head right to see something else. I’m there physically, and you virtually, but we are sharing the moment together nonetheless. “Remember that time we went to Niagara Falls?”

So many of today’s technologies are early prototypes of this eventual shared future. Combine computational photography, AR, and social; as each gets better, draw those trend lines out until they intersect, and you can start to see what that future will look like.