3 February 2017

Volumetric Display Systems Simulator

"What's... one of those?" I hear you ask.

One of these!

(images clearly downloaded off the 'net)

More specifically, "swept volume displays" are a type of 3D display technology that relies on the "Persistence of Vision", drawing points of light onto a rapidly spinning screen to produce the effect of genuine 3D objects in space.

For my final year project at university, 13 years ago now, I implemented a 3D simulator for swept volume displays. In particular, it simulated "distortional dead zones" within a simple display configuration, letting you explore which configuration gives the best image quality. Put simply, if the screen is at right angles to the beam source, the voxel is the perfect shape; the more acute the angle, the more elongated and fuzzier the voxel becomes, reducing image quality.
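For a rough feel of why that happens: a beam of a given diameter hitting the screen at a glancing angle smears out into an ellipse whose long axis grows roughly as 1/sin(angle). This is a simplified illustration of the idea, not the actual maths from my simulator:

```typescript
// Rough model of voxel distortion: a beam of diameter `beamDiameter` hitting the
// screen at `incidenceDeg` (90° = perpendicular, smaller = more glancing) smears
// into an ellipse whose long axis grows as 1 / sin(angle).
function voxelElongation(beamDiameter: number, incidenceDeg: number): number {
  const rad = (incidenceDeg * Math.PI) / 180;
  return beamDiameter / Math.sin(rad);
}

voxelElongation(1.0, 90); // 1.0  -> perfectly round voxel
voxelElongation(1.0, 30); // 2.0  -> voxel stretched to twice the beam diameter
voxelElongation(1.0, 10); // ~5.8 -> badly elongated, fuzzy voxel (a "dead zone")
```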

The interface was originally implemented in Visual Basic, with the calculations done in C, and OpenGL responsible for the rendering.


The original Visual Basic interface

The original 3D rendering, using C and OpenGL


I've learned an awful lot since then, and I was always fond of the project, so I've decided to reimplement it using modern web technologies, finally unifying the interface with the rendering in a fun and intuitive way.

This is a very quick diagram of my architecture design, using JHipster to generate the base applications:



Implementing the simulation code as a microservice makes sense: the calculation isn't instant, so it'd be easy to spin up more instances if load becomes a problem. It also makes it easier to change the simulator independently of the interface, improving performance or accuracy and so on, and vice versa. It's nicely encapsulated.
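As a very rough sketch of the boundary between the two (the endpoint path and field names here are placeholders, not the real API), the gateway simply posts a display configuration and gets back a grid of voxel sizes:

```typescript
// Hypothetical request/response shapes for the Simulator microservice.
interface DisplayConfiguration {
  screenWidth: number;        // metres
  screenHeight: number;       // metres
  rotationSpeed: number;      // revolutions per second
  beamSources: { x: number; y: number; z: number }[];
  gridResolution: number;     // voxels per axis in the returned grid
}

// Each entry is the simulated size of the voxel at that grid position;
// larger values mean more distortion.
type VoxelGrid = number[][][];

async function simulate(config: DisplayConfiguration): Promise<VoxelGrid> {
  const response = await fetch('/api/simulator/voxel-grid', {   // placeholder route
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(config),
  });
  return response.json();
}
```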

Thankfully, I still have my original code to base it on, so I don't have to reimplement everything from scratch.

Once I have a nicely designed and working solution, I can then add on the extra features I've always wanted to implement:

  • Much more intuitive interface, with real-time updating
  • Far more flexible configuration options, including the number of beam sources
  • "Collision" between beam sources (meaning you can't place a beam source in the same physical space as another one)
  • Different screen shapes (currently only vertical rectangular screens are supported)
  • Saving and loading configurations
  • Simulated object rendering, with accurate simulated distortion
  • Improved graphics!


I've already made a bit of progress, having only just started the project. I now have an almost-working "Simulator" microservice: the API gateway sends it a display configuration request, and it responds with a 3D array of voxel sizes. I also have a simple 3D representation of this grid, with mouse rotation and zoom that already work much better than my original project's, which suffered from gimbal lock:

WIP - obviously missing a lot of interface configuration and styling
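The trick that avoids the gimbal lock problem from the original project is to accumulate the mouse rotation as quaternions rather than Euler angles. Roughly like this (a simplified sketch, not the actual interface code):

```typescript
// Minimal quaternion type and operations for accumulating the view rotation.
type Quat = { w: number; x: number; y: number; z: number };

function quatFromAxisAngle(ax: number, ay: number, az: number, angle: number): Quat {
  const s = Math.sin(angle / 2);
  return { w: Math.cos(angle / 2), x: ax * s, y: ay * s, z: az * s };
}

// Hamilton product: compose rotation b followed by rotation a.
function quatMultiply(a: Quat, b: Quat): Quat {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}

// On each mouse drag, turn the pixel deltas into small rotations about the view's
// horizontal and vertical axes and compose them with the current orientation.
// Because the composition is a quaternion product, no axis ever collapses onto
// another, so there's no gimbal lock. (Renormalise occasionally to avoid drift.)
let orientation: Quat = { w: 1, x: 0, y: 0, z: 0 };
const SENSITIVITY = 0.005; // radians per pixel (arbitrary)

function onMouseDrag(dx: number, dy: number): void {
  const yaw = quatFromAxisAngle(0, 1, 0, dx * SENSITIVITY);
  const pitch = quatFromAxisAngle(1, 0, 0, dy * SENSITIVITY);
  orientation = quatMultiply(quatMultiply(yaw, pitch), orientation);
}
```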

There's clearly a bug somewhere in my Simulator microservice, as the grid isn't correct - it should be a similar shape to the previous example. I'll figure it out.

I'm quite excited about this project. I'll keep you updated with its progress!
