
Image from ETA UK: https://www.eta.co.uk/2012/12/17/a-cycle-helmet-that-gives-you-eyes-in-the-back-of-your-head/
FlyVIZ is described in a 2012 paper by an ESIEA – INSA group (Jérôme Ardouin, Anatole Lécuyer, Maud Marchal, Clément Riant and Eric Marchand) as ‘A Novel Display Device to Provide Humans with 360° Vision by Coupling Catadioptric Camera with HMD [Head-Mounted Display]’.
It is a novel application of Google-Street-View-like tech to live human vision.
The device consists of a catadioptric sensor: an upward-facing mast camera that captures the scene reflected in a 360° mirror (which can be parabolic, hyperbolic or spherical) mounted above the lens. In the current prototype, the mast protrudes from the crown of a helmet, beneath which a wrap-around visor displays the output of an image processor. In a production version the camera might be built into the helmet, along with the processor, which unwraps the mirror image via cylindrical projection (effectively the inverse of Photoshop’s ‘Polar Coordinates’ filter).
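That unwrapping step is easy to sketch. The short Python below is an illustrative reconstruction, not the authors’ code: the OpenCV and NumPy calls are real, but the function name, mirror centre and ring radii are assumptions that, on a real device, would come from calibrating the catadioptric optics.

```python
# Minimal sketch: unwrap the donut-shaped mirror image from the
# upward-facing camera into a rectangular 360-degree panorama.
import numpy as np
import cv2

def unwrap_catadioptric(img, cx, cy, r_inner, r_outer, out_w=1440, out_h=240):
    """Unwarp the annular mirror reflection into a cylindrical panorama.

    img              : frame from the upward-facing camera
    (cx, cy)         : pixel centre of the mirror in that frame (assumed known)
    r_inner, r_outer : radii bounding the useful ring of the reflection
    """
    # Each output column is one viewing direction (0..360 degrees);
    # each output row is one radius between the inner and outer rings.
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_outer, r_inner, out_h)
    theta_grid, r_grid = np.meshgrid(theta, radius)

    # Map each panorama pixel back to a source pixel (polar -> cartesian),
    # i.e. the inverse of Photoshop's 'Polar Coordinates' filter. Sampling
    # backwards like this leaves no holes in the output.
    map_x = (cx + r_grid * np.cos(theta_grid)).astype(np.float32)
    map_y = (cy + r_grid * np.sin(theta_grid)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# Example usage with guessed geometry for a hypothetical 1280x960 frame:
# frame = cv2.imread("mirror_frame.png")
# pano = unwrap_catadioptric(frame, cx=640, cy=480, r_inner=60, r_outer=440)
```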
This concept, say the authors, extends Wafaa Bilal‘s backward-facing camera, surgically mounted to the back of his head (although here with defensive rather than paranoiac intent), providing rear vision that augments the wearer’s awareness. Adapting to the 360° × 110° field of view offered by FlyVIZ is reported to take around 15 minutes (apparently without stomach-churning disorientation), after which it is possible to drive a vehicle (a boon to cyclists, who already wear cameras in self-defense) and to dodge or fend off objects thrown from behind.
The panoptic concept is not new to photographers, who have employed panoramic cameras since Austrian Wenzel Prokesch submitted a patent for the ‘Ellipsen Daguerreotype’, and who have been fanatically ‘stitching’ since Adobe included Photomerge automation in Photoshop CS. The peripheral expansion this offers to live vision might inspire new understandings of human spatial awareness (and the potential to swamp us with twice the incoming visual data, and far more if Google’s Project Glass incorporates it!).