
A virtual reality demo for simulating low vision from Humphrey Visual Field Analyzer (HVFA) data


Immersive Low Vision Simulation

Visual impairments interfere with normal, daily activities such as driving, reading, and walking. By simulating impairments, we may better understand how those affected perceive and interact with their environment. Virtual reality (VR) provides a unique opportunity for normally sighted people to experience visual impairments first-hand. Accordingly, we have created an immersive simulation that maps patient data from a Humphrey Visual Field Analyzer (HVFA) to the field of view of a head-mounted display. I developed this simulation with Unity 2017.3.0f3, C#, Python, and Cg/HLSL. This version of the simulation contains no actual patient data, to ensure privacy and anonymity. Data preprocessing is performed in Python (see the LowVisionDataProcessing folder), with the exception of the visual field matching, which is performed within Unity (see the LowVisionProject folder).

Code Notes:

  • Unity: The code needs tidying and options for more current displays. The processing also needs to be explained in more detail through comments.
  • Python: A Python scripts folder has been added, but the current scripts are sloppy and involve a lot of manual typing to read/write files.

GitHub Notes: See the linked repository for an example of how to structure license and copyright disclaimers when this repo is made public: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

Visual Impairments

An estimated 2.9 million Americans are diagnosed with low vision, in which vision is impaired to a degree that cannot be corrected with glasses alone [1]. The visual impairments produced by low vision conditions are heterogeneous and may therefore vary widely between conditions, and even between patients with the same condition. There are several types of visual field loss to consider for simulation:

  1. Scotoma — a partial loss of vision or a blind spot, surrounded by normal vision (e.g. diabetic retinopathy)
  2. Central Scotoma — loss of central vision (e.g. age-related macular degeneration, optic neuropathy)
  3. Peripheral Scotoma — loss of peripheral vision (e.g. glaucoma, retinal detachment)
  4. Hemianopic Scotoma (Hemianopia) — binocular vision loss in each eye’s hemifield (e.g. optic nerve damage)
    1. Homonymous — loss of half of vision on the same side in both eyes (left or right)
    2. Heteronymous — loss of half of vision on different sides in both eyes (binasal or bitemporal)

Low vision conditions illustrated
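The field-loss types listed above can be approximated as idealized attenuation masks. The sketch below (assuming a square field measured in pixel units; real HVFA-driven fields are far more irregular) builds a central scotoma, a peripheral scotoma, and a homonymous hemianopia as simple 0/1 maps:

```python
import numpy as np

def radial_distance(h, w):
    """Distance of each pixel from the field centre, in pixels."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.hypot(ys - h / 2, xs - w / 2)

def central_scotoma(h, w, radius):
    """Vision lost (0) inside a central disc, intact (1) outside."""
    return (radial_distance(h, w) > radius).astype(float)

def peripheral_scotoma(h, w, radius):
    """Vision intact (1) inside a central disc, lost (0) outside."""
    return (radial_distance(h, w) <= radius).astype(float)

def homonymous_hemianopia(h, w, side="left"):
    """Loss of one lateral hemifield (same side in both eyes)."""
    mask = np.ones((h, w))
    half = w // 2
    if side == "left":
        mask[:, :half] = 0.0
    else:
        mask[:, half:] = 0.0
    return mask
```

In the actual simulation, such hard-edged masks are replaced by the patient-specific intensity maps derived in the data-processing steps below.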

Data Processing

The Humphrey Visual Field Analyzer (HVFA) measures retinal sensitivity at specific points in a patient’s field of view. Our simulation processes raw values from the 30-2 protocol, which measures 30° temporally and nasally with 76 points. Data processing was conducted in Python.

Example: 30-2 protocol HVFA data for the right eye [2].

Retinal Sensitivity Conversion

Retinal sensitivity values, measured in decibels, are converted to a linear scale and normalized to the range [0, 1]. Sens_x corresponds to the current sensitivity value and Sens_V to the maximum sensitivity value.

Formula for Retinal Sensitivity
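The exact formula is shown in the figure; as a sketch, assuming the standard dB-to-linear conversion (linear sensitivity = 10^(dB/10)) and normalization by the maximum sensitivity (here assumed to be 40 dB, a common HVFA ceiling):

```python
import numpy as np

def normalize_sensitivity(sens_db, sens_max_db=40.0):
    """Convert HVFA retinal sensitivities (dB) to a linear scale and
    normalize into [0, 1] against the maximum sensitivity value."""
    sens_db = np.asarray(sens_db, dtype=float)
    linear = 10.0 ** (sens_db / 10.0)            # dB -> linear scale
    return linear / 10.0 ** (sens_max_db / 10.0)  # normalize by maximum
```

Note that under this convention a measured 0 dB maps to a small positive value rather than exactly zero.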

Interpolation

The data points are plotted with Gaussian interpolation to produce an intensity map.

Normalized sensitivity values and corresponding intensity map
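One way to realize such an interpolation (a sketch, not necessarily the project's exact implementation) is Gaussian-weighted averaging over the scattered sample points:

```python
import numpy as np

def gaussian_intensity_map(points, values, out_shape, sigma=6.0):
    """Turn scattered HVFA samples into a dense intensity map via
    Gaussian-weighted averaging (a Shepard-style interpolant)."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    num = np.zeros(out_shape)
    den = np.zeros(out_shape)
    for (py, px), v in zip(points, values):
        # Gaussian weight of this sample at every output pixel.
        wgt = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2.0 * sigma ** 2))
        num += wgt * v
        den += wgt
    return num / np.maximum(den, 1e-12)
```

Because the result is a weighted average of the normalized sensitivities, the interpolated map stays within [0, 1].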

Visual Field Match

The intensity map is scaled to match the field of view (FOV) and pixel density of the head-mounted display. Edge values of the map are extended outward to complete the periphery.

Mapping of the HVFA data to the Oculus Rift CV1 pixel resolution and FOV.
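The scaling and edge-extension step might look like the following sketch, where the argument names, the (vertical, horizontal) FOV ordering, and the nearest-neighbour resampling are assumptions rather than the project's exact choices:

```python
import numpy as np

def fit_map_to_display(intensity, screen_shape, data_fov_deg, screen_fov_deg):
    """Scale the intensity map to the display's pixel density, then
    extend edge values outward to cover FOV beyond the measured field.

    screen_shape: (height, width) in pixels per eye.
    data_fov_deg: total extent of the measured field (e.g. 60 for +/-30 deg).
    screen_fov_deg: (vertical, horizontal) display FOV in degrees.
    """
    h, w = screen_shape
    # Fraction of the screen covered by the measured field, per axis.
    frac_y = min(data_fov_deg / screen_fov_deg[0], 1.0)
    frac_x = min(data_fov_deg / screen_fov_deg[1], 1.0)
    inner_h, inner_w = int(round(h * frac_y)), int(round(w * frac_x))
    # Nearest-neighbour resample of the map onto the inner region.
    ys = np.minimum((np.arange(inner_h) * intensity.shape[0]) // inner_h,
                    intensity.shape[0] - 1)
    xs = np.minimum((np.arange(inner_w) * intensity.shape[1]) // inner_w,
                    intensity.shape[1] - 1)
    inner = intensity[np.ix_(ys, xs)]
    # Repeat edge values outward to fill the periphery.
    pad_y, pad_x = h - inner_h, w - inner_w
    return np.pad(inner,
                  ((pad_y // 2, pad_y - pad_y // 2),
                   (pad_x // 2, pad_x - pad_x // 2)),
                  mode="edge")
```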

The simulation uses the known screen dimensions and approximate fields of view of stereoscopic display devices to map patient data to left and right eye screen shaders. The simulation explicitly supports the following VR head-mounted displays:

  • Oculus Rift CV1
  • Oculus Gear ft. Samsung Galaxy S7
  • Oculus Rift DK2
  • HTC Vive

Otherwise, the simulation assumes a 960 × 1080 per-eye screen resolution and a 94° × 104° FOV, which matches the specs of the Oculus Rift DK2. See PlatformDefines.cs for more.
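In Python, the lookup-with-fallback logic of PlatformDefines.cs could be sketched as follows. Only the DK2 fallback numbers come from this README; the per-eye resolutions and FOVs for the other devices are illustrative placeholders and should be checked against PlatformDefines.cs:

```python
# Per-eye (width, height) in pixels and (horizontal, vertical) FOV in
# degrees. Only the DK2 values are taken from the README text; the
# remaining entries are illustrative placeholders.
DISPLAY_SPECS = {
    "Oculus Rift CV1":     {"resolution": (1080, 1200), "fov": (94, 104)},
    "Oculus Rift DK2":     {"resolution": (960, 1080),  "fov": (94, 104)},
    "HTC Vive":            {"resolution": (1080, 1200), "fov": (100, 110)},
    "Gear VR (Galaxy S7)": {"resolution": (1280, 1440), "fov": (96, 96)},
}

# DK2-equivalent fallback used when the device is not recognized.
DEFAULT_SPEC = {"resolution": (960, 1080), "fov": (94, 104)}

def display_spec(device_name):
    """Look up a device's per-eye spec, falling back to DK2 values."""
    return DISPLAY_SPECS.get(device_name, DEFAULT_SPEC)
```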

Graphical Results

The generated map informs two screen shaders, which render directly to the display. Left- and right-eye data are processed separately.

Graphical results display no scotoma, opacity, and blur fields for left and right eyes
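As a CPU-side sketch of what the screen shaders do (the real shaders run in Cg/HLSL on the GPU), a side-by-side stereo frame can be attenuated with per-eye intensity maps:

```python
import numpy as np

def apply_field_maps(frame, left_map, right_map):
    """Attenuate a side-by-side stereo frame (grayscale, H x W) with
    per-eye intensity maps, mimicking the per-eye screen shaders."""
    h, w = frame.shape
    half = w // 2
    out = frame.astype(float)
    out[:, :half] *= left_map   # left-eye half of the display
    out[:, half:] *= right_map  # right-eye half of the display
    return out
```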

Discussion

We have computationally modeled visual impairments for immersive simulation, which may allow us to better understand both the experience of those with vision loss and vision loss itself. Additionally, the presented work provides an approach to vision loss simulation in which patient data from a perimetry test is leveraged to better represent the heterogeneity of human vision.

While low vision simulations exist, most are based on the general symptoms of eye diseases [3] and are unable to produce the irregular scotomas that individuals experience in reality. At present, there are few empirical evaluations of visual impairment simulations and no simulation has been implemented in real time with eye tracking. We intend to address both of these concerns in future work.

Citation

Copyright (c) 2019 Haley Adams · Vanderbilt University · LiVE Lab. Cite my publication, if Bobby ever gives me the green light to publish something with this. *cries*

References

[1] Prevent Blindness America and National Eye Institute. Vision Problems in the U.S. 2012.

[2] Thomas R & George R. Interpreting automated perimetry. Indian J Ophthalmol. 2001.

[3] Haojie Wu, Daniel Ashmead, Haley Adams, & Bobby Bodenheimer. Simulating Macular Degeneration with Virtual Reality: A Street Crossing Study in a Roundabout Environment. Frontiers in Virtual Environments. 2018.
