Great stuff! I'm a little frazzled by the fact that you're leaving $350 cameras in the woods unattended, though. I figure they're not too far from your home given the CAT5 length you mention, but for anyone interested in building one in a more remote location (IoT SIMs with decent data plans are a thing now!), you might want to consider camouflage, and/or setting up in a tree.
You'd be surprised how much untouched wilderness there is out there. And generally, opportunistic criminals don't tend to be wandering through the wilderness looking for cameras to steal. And even then, their face will be recorded, for all they know.
Maybe if you go in circles. If you cover 20 miles/day, even in places like Alberta or Wyoming, it's hard to go much more than two days without bumping into someone, in one form or another.
On the trails, sure, but a short way off of even the busiest trails you could go an extremely long time without seeing anyone, especially in areas that don't allow hunting.
Yeah - we used to have a few of those unfortunate events reported every year when I was growing up in Wyoming. Sometimes even when it wasn't cold (dehydration is a thing, apparently). As young kids, we didn't know anything about anything, and always wondered, "Why didn't they walk in a straight line? They were never more than 75 miles away from help."
It's really hard to walk in a straight line in dense woods. A compass helps, but even then you have to know how to use it. It's usually best to find a stream, follow it down to a river, and then follow that down to some sort of settlement (hopefully).
I worked on a revamp of the system for controlling those cameras remotely, viewing all your pictures, etc. (the cameras had cell modems) at my last job.
It's insane. Not only are the cameras a little pricey (not crazy), but HOLY SHIT the data plans are expensive. Granted, that's where they make money, but still.
What do you mean by decent data plans? I'm seeing plans that are between $10 and $20 USD per GiB. That's definitely not good enough for live video, but I suppose you could release a new video every month.
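Back-of-the-envelope, assuming a fairly modest 1 Mbps stream and $15/GiB (my numbers, not anything from the plans above):

    1 Mbps ≈ 0.42 GiB/hour
    0.42 GiB/hour × 24 h × 30 days ≈ 300 GiB/month
    300 GiB × $15/GiB ≈ $4,500/month

So even a low-bitrate live feed runs to thousands per month; periodic batch uploads are the only thing that pencils out.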
It looks like it's just a single 360 camera with one viewpoint, so it isn't really a "true immersive experience" on VR headsets; it's just a 2D image projected on a sphere. For true immersion in VR you need stereoscopic 360-degree recording, which can be achieved by using two of the 360 cameras or with a dedicated multi-GoPro mount.
Surely if you have two 360° cameras, one of them will be capturing the other one. I would think you'd need one 360° camera rig with stereo pairs of cameras.
With just two cameras, not only would you capture the other camera in view, but in those collinear directions I don't believe it would be possible to get any depth information (or it would be all weird, like the same view but magnified in one eye).
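Standard stereo geometry says the same thing: disparity scales with the baseline component perpendicular to the view direction, roughly

    disparity ≈ f × B_perp / z
    (f = focal length, z = depth, B_perp = baseline projected perpendicular to the view ray)

Looking along the line through both cameras, B_perp = 0, so there's zero parallax and no depth to extract in exactly those directions.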
See: Lytro, which has struggled to find a good use for its lightfield cameras, but might actually have something now. Lightfield cameras can extract depth information directly, which lets them synthesize stereo views with a single camera.
Duct-tape enough of them together and you can synthesize stereo views in every direction, which is pretty cool.
Looks like something is working: the browser is receiving an image, and it's not a solid black colour. Turning the contrast up to the max, you can see some differences.
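If anyone wants to do the same check in code rather than in an image editor, here's a minimal sketch (it assumes the frame is already drawn to a canvas; nothing here is from the project itself):

    // Scale near-black pixel values up so faint structure becomes visible.
    function stretchContrast(ctx: CanvasRenderingContext2D, w: number, h: number): void {
      const img = ctx.getImageData(0, 0, w, h);
      const d = img.data;
      let max = 1;
      for (let i = 0; i < d.length; i += 4) {
        max = Math.max(max, d[i], d[i + 1], d[i + 2]); // track brightest RGB value
      }
      const gain = 255 / max; // boost the brightest channel to full range
      for (let i = 0; i < d.length; i += 4) {
        d[i] *= gain;     // R (Uint8ClampedArray clamps and rounds for us)
        d[i + 1] *= gain; // G
        d[i + 2] *= gain; // B
      }
      ctx.putImageData(img, 0, 0);
    }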
Does anyone else find it weird that you click and drag in the opposite direction you (well I) would expect, considering that the cursor becomes a hand that grabs?
I don't have a Cardboard to properly check whether the image really is "3D", but it seems like it's just a photosphere and therefore doesn't really have depth information. Is that right?
Now I have the nightmares again. (I got to implement spherical projections in realtime on a Pentium 90. It's... an interesting problem. Yeah. That's what I'll call it)
Yeah, and there are some other in-dev camera rigs that use multiple cameras plus software interpolation to simulate 3d as well.
The thing that's most intriguing to me is the experiments using multiple depth cameras set up around a space, with software building a live 3D model and overlaying the video data as a texture on top of it. It's all very rudimentary and low-res at the moment, but it's the sort of thing that can eventually become 3D/VR telepresence, and that just strikes me as awesome.
I'm very new to this sort of thing, but it seems like "stereoscopic video" isn't nearly as common as I expected it to be. 2D panoramas seem to be very much the norm. I'm not sure if this is down to production difficulty, delivery difficulty, or stereoscopy just not being impressive enough.
One thing is that anything recorded by a camera (rather than rendered in realtime) is going to be "wrong" once you tilt your head, since your eyes are then stacked on top of each other rather than next to each other.
Yup. The Ricoh Theta S captures spherical panoramas. The VR terminology is just overselling it. Neat idea, though. It would be cool if there were a sonic component.
I think you're on to something. Say we record monaural audio with directional mics on/beside each cam, then encode and compress each stream, allowing for realtime stereo mixing during playback determined by view angle. Add a compass, accelerometer, and gyro to track orientation. Couldn't we then achieve the desired effect, and even simulate spatial audio effects, 6DoF movement in the scene, blend in UI sounds, and add 3D sound to the environment? AR, anyone?

With a small peripheral you could emit a few chirps at different frequencies and measure them with the same mic rig to build a virtual map of the environment's acoustic characteristics, then use it to render sound effects for composited elements, generated UI, and nav feedback, similar to the way image-based lighting is used today to make artificially generated objects appear as if they were really present in the scene.
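The view-angle mixing part is already doable in a browser. A minimal sketch with the Web Audio API (the per-mic setup is made up, and real spatialization would want PannerNode/HRTF rather than simple stereo panning):

    // One mono source per directional mic, panned by its angle relative to the view.
    const audioCtx = new AudioContext();

    interface MicChannel {
      angle: number;            // mic facing, radians, world frame
      panner: StereoPannerNode;
    }

    function addMic(buffer: AudioBuffer, angle: number): MicChannel {
      const source = audioCtx.createBufferSource();
      source.buffer = buffer;
      const panner = audioCtx.createStereoPanner();
      source.connect(panner).connect(audioCtx.destination);
      source.start();
      return { angle, panner };
    }

    // Call every frame with the current view yaw from the compass/gyro fusion.
    function mix(channels: MicChannel[], viewYaw: number): void {
      for (const ch of channels) {
        ch.panner.pan.value = Math.sin(ch.angle - viewYaw); // left/right placement
      }
    }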
Sounds like a good open hardware/software project but I'm short on cameras and mics for something like that. Anyone see potential there?
Combine that with laser rangefinders (and filters for their wavelength on the cams) and you can sample 3D point cloud data too, render the environment as a 3D (4D) scene, and use it for compositing reference, or slap a small LiDAR scanner under the whole thing for precise measurement.
Pretty cool. Make it live and add some sound and you could almost feel like you're there.
Simple nitpick: when I drag to the left, I expect to go to the right. (I'm just seeing this through my browser though, maybe it makes sense in VR, idk.)
There is no consensus on drag direction. I've built several projects and have alternated the drag direction on each and have always received vociferous complaints about both.
Making it a preference means creating a settings UI. People have their preferences, but they are also adaptable. It's not immediately clear that a "VR" experience (not that I'd call a 360° photo "VR") would benefit from a preferences UI.
If there is no consensus on drag direction, Littlstar's player should be the consensus. It gives a major wedgie to every other player in the space, at the moment. Especially when it comes to motion and drag.
Drag direction in 3D depends on the user's background and the kind of application. It depends on whether you expect to be moving the world or a virtual character. In the case of a panorama viewer, I understand that some people might expect to move the picture (the world), as you do in a phone photo gallery or map application. For a first-person interactive application, it's more likely you'd expect the character to turn in the direction you're dragging the mouse. You're controlling the character, not the world.
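Either expectation comes down to a single sign flip in the camera update, which is why it's cheap for a viewer to support both. A sketch of the idea (the names are mine, not from any particular player):

    // sign = +1 to "grab the world", -1 to "steer the character".
    interface DragConfig { sign: 1 | -1; sensitivity: number; }

    let yaw = 0;
    let pitch = 0;

    function onDrag(dx: number, dy: number, cfg: DragConfig): void {
      yaw += cfg.sign * cfg.sensitivity * dx;
      pitch += cfg.sign * cfg.sensitivity * dy;
      // Clamp pitch so the view can't flip over the poles.
      pitch = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, pitch));
    }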
Doesn't really affect VR; VR is absolute. Those are just the desktop controls... it's sort of up to the developer which way they want it to drag, but the default is left/left. I may have to update that, since I do hear your preference is more common.
Doesn't seem to work for me on my HTC Vive, but I've never used any A-Frame site before, so I might just be doing it wrong. I tried on both Chrome and Firefox.
For starters, you need an experimental build of Chromium that Google provides in a ZIP file, or you need Firefox Nightly, and in both cases you have to enable a flag.
After that, I do not know what the current status is for A-Frame supporting the Vive. And of course, there was recently a big change to the WebVR API, so A-Frame may not be caught up to that, either.
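In the meantime, a quick way to check whether your build is even exposing the API (this is the pre-revision WebVR entry point; I'm not sure what the new API looks like in shipping builds yet):

    // Lists connected headsets, or explains why none are visible.
    async function listHeadsets(): Promise<void> {
      const nav = navigator as any; // WebVR was never added to lib.dom.d.ts
      if (typeof nav.getVRDisplays !== 'function') {
        console.log('WebVR not exposed: wrong build, or the flag is not enabled');
        return;
      }
      const displays = await nav.getVRDisplays();
      if (displays.length === 0) {
        console.log('API present, but no headset detected');
        return;
      }
      for (const d of displays) {
        console.log('Found display:', d.displayName);
      }
    }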
As far as I know, A-Frame's "known good configuration" is Oculus Rift + Firefox Nightly, or (HTC Vive|Oculus Rift) + not-latest Experimental Chromium.
And Windows 10, of course, because Facebook can't find a single person capable of doing graphics work on Linux out of 15,000 employees, or so they would have us believe.
> because Facebook can't find a single person capable of doing graphics work on Linux out of 15,000 employees
Either you muck around with NVidia binary Linux drivers (or worse, binary drivers for some ARM chipset in an Android phone), or you're working with Intel stuff (which isn't going to give you a cutting-edge VR experience).
A for-profit corporation has better things to spend its 20% time on.