ARVO Meeting Abstracts
Invest Ophthalmol Vis Sci 2013;54: E-Abstract 5523.
© 2013 ARVO


Automatic Segmentation of Photoreceptors in AOSLO Images Using Graph Theory and Dynamic Programming

Stephanie Chiu1, Adam Dubis2, Alfredo Dubra3,4, Joseph Carroll3,4, Joseph Izatt1,2 and Sina Farsiu2,1

1 Biomedical Engineering, Duke University, Durham, NC
2 Ophthalmology, Duke University, Durham, NC
3 Ophthalmology, Medical College of Wisconsin, Milwaukee, WI
4 Biophysics, Medical College of Wisconsin, Milwaukee, WI

Commercial Relationships: Stephanie Chiu, Duke University (P); Adam Dubis, None; Alfredo Dubra, US Patent No: 8,226,236 (P); Joseph Carroll, Imagine Eyes, Inc. (S); Joseph Izatt, Bioptigen, Inc. (I), Bioptigen, Inc. (P), Bioptigen, Inc. (S); Sina Farsiu, Duke University (P)

Support: None


Purpose: The adaptive optics scanning light ophthalmoscope (AOSLO) has been a key instrument for analyzing the photoreceptor mosaic and revealing subclinical ocular pathologies. However, manual identification of photoreceptors is subjective and labor intensive. In this work, we developed an algorithm to automatically segment and identify cone photoreceptors in AOSLO images and validated its performance against a state-of-the-art algorithm.

Methods: We extended our segmentation framework based on graph theory and dynamic programming (GTDP) to segment cone photoreceptors [1]. We used local maxima operations to obtain pilot cone location estimates and transformed each cone into the quasi-polar domain to segment and more precisely locate it. To validate our algorithm, we compared the GTDP algorithm to: 1) the fully automatic Garrioch implementation of the Li & Roorda method [2], and 2) the semi-automatic method from [2], in which any cones missed by the Garrioch method were added manually to create the gold standard. We used the same dataset as in [2], consisting of 10 repeated images at each of 4 parafoveal locations in 21 patients (840 images total). Ref: (1) Chiu et al., Biomed Opt Express, Vol. 3, 2012. (2) Garrioch et al., Optom Vis Sci, Vol. 89, 2012.
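The core GTDP idea can be illustrated with a minimal dynamic-programming sketch: once a cone is unwrapped into the quasi-polar domain, its closed boundary becomes a roughly left-to-right path through a cost image, which a minimum-cost path search can recover. The cost matrix and the 3-connected column-to-column moves below are illustrative assumptions, not the exact graph weights or connectivity of the published method.

```python
import numpy as np

def min_cost_path(cost):
    """Left-to-right minimum-cost path through a 2-D cost image via
    dynamic programming. In a quasi-polar unwrapping, a closed cone
    boundary maps to such a path. Illustrative sketch only: the cost
    image and 3-neighbor connectivity are assumptions."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()           # accumulated cost per pixel
    back = np.zeros((rows, cols), dtype=int)  # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 1, rows - 1)
            prev = acc[lo:hi + 1, c - 1]      # reachable predecessors
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    r = int(np.argmin(acc[:, -1]))            # cheapest endpoint
    path = [r]
    for c in range(cols - 1, 0, -1):          # trace pointers back
        r = back[r, c]
        path.append(r)
    return path[::-1]                          # boundary row per column
```

In practice the cost image could be, for example, an inverted gradient magnitude so that the recovered path hugs the bright-to-dark edge of each cone; the path is then mapped back from the quasi-polar domain to a closed contour.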

Results: Individual cones located by our GTDP method were matched with the gold standard and compared to the Garrioch method, as shown in Table 1 and Figure 1. The proposed GTDP method improved on the cone detection rate of the Garrioch method (1.7% vs. 5.5% miss rate).
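Table 1's categories (correctly identified, missed, and additional cones) imply a one-to-one matching of detected cones against the gold standard. A simplified scoring sketch follows; the pixel tolerance and the greedy nearest-neighbor matching are assumptions, since the abstract does not specify the exact matching criterion.

```python
import numpy as np

def cone_match_stats(detected, gold, tol=2.0):
    """Greedily match detected cone centers to gold-standard centers
    within `tol` pixels; return (correct, missed, added) counts.
    Hypothetical scoring scheme, not the authors' exact criterion."""
    gold = list(gold)
    unmatched_gold = set(range(len(gold)))
    correct = added = 0
    for d in detected:
        best, best_dist = None, tol
        for gi in unmatched_gold:
            dist = np.hypot(d[0] - gold[gi][0], d[1] - gold[gi][1])
            if dist <= best_dist:             # closest gold cone in range
                best, best_dist = gi, dist
        if best is not None:
            unmatched_gold.remove(best)       # consume the matched cone
            correct += 1
        else:
            added += 1                        # no gold cone nearby
    missed = len(unmatched_gold)              # gold cones never matched
    return correct, missed, added
```

The miss rate reported in the abstract would then be `missed / (correct + missed)`, aggregated over all 840 images.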

Conclusions: The GTDP method proposed here achieved a higher detection rate than the state-of-the-art technique. These results are highly encouraging for reducing the time and effort required to identify cones in ophthalmic studies.

Table 1. Cone identification performance of fully automatic methods compared to the gold standard across all 840 images. Correctly identified cones were detected in both the fully automatic and gold standard techniques, missed cones were only present in the gold standard, and additional cones were not present in the gold standard result.



Figure 1. Mean performance of the fully automatic cone identification algorithms. a) Original AOSLO image of the cone mosaic in log scale. b) Garrioch result (yellow: correctly identified; green: missed). c) GTDP result (magenta: correctly identified; green: missed; blue: added).


Keywords: 549 image processing • 648 photoreceptors

© 2013, The Association for Research in Vision and Ophthalmology, Inc., all rights reserved. Permission to republish any abstract or part of an abstract in any form must be obtained in writing from the ARVO Office prior to publication.