CLINICS
Vol. 65. Issue 12. Pages 1223-1228 (January 2010)
CLINICAL SCIENCE
Open Access
Evaluation of machine learning classifiers in keratoconus detection from Orbscan II examinations
Murilo Barreto Souza (corresponding author; murilobsouza@gmail.com; Tel.: 55 71 3203-3466), Fabricio Witzel Medeiros, Danilo Barreto Souza, Renato Garcia, Milton Ruiz Alves
Faculdade de Medicina da Universidade de São Paulo, Ophthalmology, São Paulo, São Paulo, Brazil
Under a Creative Commons license
Abstract
PURPOSE:

To evaluate the performance of support vector machine, multi-layer perceptron and radial basis function neural network as auxiliary tools to identify keratoconus from Orbscan II maps.

METHODS:

A total of 318 maps were selected and classified into four categories: normal (n = 172), astigmatism (n = 89), keratoconus (n = 46) and photorefractive keratectomy (n = 11). For each map, 11 attributes were obtained or calculated from data provided by the Orbscan II. Ten-fold cross-validation was used to train and test the classifiers. Besides accuracy, sensitivity and specificity, receiver operating characteristic (ROC) curves for each classifier were generated, and the areas under the curves were calculated.

RESULTS:

The three selected classifiers performed well, and there were no differences between their performances. The areas under the ROC curves of the support vector machine, multi-layer perceptron and radial basis function neural network were significantly larger than those of all individual Orbscan II attributes evaluated (p<0.05).

CONCLUSION:

Overall, the results suggest that support vector machine, multi-layer perceptron and radial basis function neural network classifiers, trained on Orbscan II data, could represent useful techniques for keratoconus detection.

KEYWORDS:
Neural networks
Artificial intelligence
Clinical decision support systems
Corneal topography
Diagnosis
INTRODUCTION

Keratoconus (KC) is a bilateral and non-inflammatory condition characterized by progressive thinning, protrusion and scarring of the cornea.1 The disease usually becomes clinically evident at puberty, and its etiology remains unknown.2 Although it has well-described clinical signs, early forms of the disease may be undetected, even when computer-assisted videokeratography techniques or other methods are used to evaluate the cornea.3

Prior to the development of refractive surgery, it was considered sufficient to diagnose clinically evident keratoconus.4 However, given the spread of refractive surgery,5 a careful differentiation between normal and keratoconus cases is essential to avoid postoperative complications such as keratectasia.6

Classification represents an important process in medical care. To help with this task, predictive models are used in a variety of medical domains, including diagnosis. These models are usually based on knowledge acquired from actual cases stored in databases. The data used to build these models can either be preprocessed and expressed as a set of rules or serve as training data for statistical or machine learning models.7

Machine learning models have already been used in keratoconus detection. Previous papers have focused on the assessment of neural networks in keratoconus diagnosis; however, only multi-layer perceptron (MLP) and anterior topographic data have been used.3,5,8,9

Like the popular MLP artificial neural network, the support vector machine (SVM) and the radial basis function neural network (RBFNN) are supervised learning methods that can be used for regression or classification.10

The Orbscan II™ (Bausch & Lomb) is a hybrid system that acquires data through slit-scanning and Placido ring technology. This instrument is able to map multiple ocular surfaces beyond the anterior corneal surface.11 A well-known theorem in prediction theory states that, when more variables describing an event can be measured, the model can predict the outcome more precisely.12 Thus, we hypothesized that a high accuracy in the classification of keratoconus subjects can be reached when Orbscan II data are used to develop supervised learning methods.

In this study, we evaluated the performance of SVM, MLP and RBFNN to detect keratoconus apart from all other corneal patterns, using Orbscan II data.

METHODS

This study was composed of three phases. First, Orbscan II data were retrospectively collected from medical records. In the second phase, these data were preprocessed in order to properly present them to the classifiers. SVM, MLP and RBFNN classifications were applied in the third phase. Subjects were enrolled from patients examined at the private practice of one of the authors (M.B.S.) between January 2004 and January 2009. Research followed the tenets of the Declaration of Helsinki, and Institutional Review Board approval was obtained.

Only one eye of each patient was randomly included in the study. Diagnostic classification for all patients was obtained from medical records and Orbscan II data review.

The examinations were classified into four different corneal categories: normal, astigmatism, keratoconus (KC) and photorefractive keratectomy (PRK).

The maps were classified as keratoconus if they had a central corneal power greater than 48.7 D, an inferior superior asymmetry (I-S) value above 1.9,13,14 or at least one of the following biomicroscopic findings: Vogt's striae or Fleischer's ring.

Clinically diagnosed normal eyes, with no abnormal flattening or steepening on tangential map and absence of irregular astigmatism, were included in the normal (<1.5 D cylinder) or astigmatism (≥1.5 D cylinder) groups.

Orbscan II maps with poor corneal coverage, missing data points, poor fixation or lid artifacts were excluded.

The machine learning classifiers were developed to detect the presence of KC apart from other corneal patterns.

WEKA software15 version 3.6.2 was used to implement the SVM and RBFNN classifiers, and NETLAB16 software was used to implement the MLP model. Although the holdout method is the simplest technique for “honestly” estimating error rates, a single random partition can be misleading for small or moderately sized samples, and multiple train-and-test experiments can do better. In order to find the best classifier parameters and to evaluate their generalization ability, a 10-fold cross-validation was used. In 10-fold cross-validation, the cases are randomly divided into 10 mutually exclusive test partitions of approximately equal size.17 At each train-and-test experiment, nine partitions are used for training and one partition for testing the performance.17
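The classifiers themselves were built in WEKA and NETLAB; purely to illustrate the train-and-test procedure described above, the sketch below shows 10-fold cross-validation in Python with scikit-learn. The feature matrix X, the labels y and the SVC settings are hypothetical placeholders, not the study's data or parameters.

```python
# Illustrative sketch only (not the WEKA/NETLAB setup used in the study):
# 10-fold cross-validation as described above, using scikit-learn.
# X (318 x 11 attributes) and y (1 = keratoconus, 0 = other pattern) are
# hypothetical placeholders for the Orbscan II data.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(318, 11))             # stand-in for the 11 Orbscan II attributes
y = (rng.random(318) < 0.15).astype(int)   # stand-in labels (~15% keratoconus)

# Cases are randomly divided into 10 mutually exclusive partitions; at each
# train-and-test experiment, 9 partitions train the model and 1 tests it.
cv = KFold(n_splits=10, shuffle=True, random_state=1)
accuracies = []
for train_idx, test_idx in cv.split(X, y):
    clf = SVC(kernel="rbf")
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean 10-fold accuracy: {np.mean(accuracies):.3f}")
```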

Pooled examinations from the four corneal categories were randomly divided into each of the 10 partitions used to train and test the classifiers. The performance of the classifiers reflected the ability to detect keratoconus apart from the other non-keratoconus patterns in the test partitions.

We also applied receiver operating characteristic (ROC) analysis to obtain ROC curves and calculated the areas under the curves (AROC).18–21
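As an illustration of the ROC analysis, the sketch below computes a ROC curve, its area (AROC) and the sensitivity at a fixed specificity from a set of classification scores. The y_true and y_score arrays are hypothetical placeholders for the pooled test-partition labels and classifier probability estimates.

```python
# Illustrative sketch only: ROC curve, AROC and sensitivity at fixed
# specificity (as reported in Table 2). y_true/y_score are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = (rng.random(318) < 0.15).astype(int)                  # 1 = keratoconus
y_score = np.clip(0.6 * y_true + 0.5 * rng.random(318), 0, 1)  # stand-in scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # tpr = sensitivity, fpr = 1 - specificity
aroc = roc_auc_score(y_true, y_score)

specificity = 1.0 - fpr
sens_at_90_spec = tpr[specificity >= 0.90].max()   # sensitivity at >= 90% specificity
print(f"AROC = {aroc:.2f}, sensitivity at 90% specificity = {sens_at_90_spec:.2f}")
```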

Data collection

All Orbscan II tests were performed by experienced examiners using the acquisition protocol recommended by the manufacturer. The center of the map was the apex determined by Placido data. Floating alignment and a cornea fit zone of 9 mm were applied for best-fit spheres in all cases.

Eleven quantitative attributes from each Orbscan II examination were used as input data for the algorithms: anterior best-fit sphere, posterior best-fit sphere, astigmatism, maximum and minimum simulated keratometry, index of irregularity of the central 5 mm, thinnest point pachymetry, central corneal power in diopters, I-S, maximum anterior elevation and maximum posterior elevation (Table 1).

Table 1.

Attributes used as input data for the machine learning classifiers.

Attributes  Description 
Anterior best-fit sphere  Anterior best-fit sphere, using a floating alignment in a cornea fit zone of 9 millimeters 
Posterior best-fit sphere  Posterior best-fit sphere, using a floating alignment in a cornea fit zone of 9 millimeters 
Simulated astigmatism  Simulated astigmatism provided by Orbscan II 
5mm irregularity  Index of irregularity of the central 5 mm provided by Orbscan II 
Maximum simulated keratometry  Maximum simulated keratometry provided by Orbscan II 
Minimum simulated keratometry  Minimum simulated keratometry provided by Orbscan II 
Maximum anterior elevation  Highest anterior elevation point over the best-fit sphere within the central 5 mm 
Maximum posterior elevation  Highest posterior elevation point over the best-fit sphere within the central 5 mm 
Thinnest point  Thinnest point pachymetry provided by Orbscan II 
I-S  Difference between superior and inferior average powers of 15 data points, located approximately 2.5 to 3.0 mm peripheral to the corneal vertex, at 30° intervals 
Central corneal power  Average dioptric power of rings 2, 3 and 4, on sagittal topography 

The I-S value was calculated as the difference between the superior and inferior average powers of 15 data points, located approximately 2.5–3.0 mm peripheral to the corneal vertex, at 30° intervals.13,14

The central corneal power was obtained by averaging the dioptric power points on rings 2, 3 and 4, based on sagittal topography.13,14

The maximum anterior and posterior elevations were defined by the highest elevation point over the best-fit sphere within the central 5 mm of the Orbscan II map.
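Purely as an illustration of how two of the derived attributes could be computed, the sketch below implements the I-S value and the central corneal power from sagittal power samples. The sampling angles, the power_by_angle dictionary and the ring inputs are assumptions made for the example, not the Orbscan II export format.

```python
# Illustrative sketch only: I-S value and central corneal power from sagittal
# power samples. The sampling angles and input structures are assumptions.
import numpy as np

def inferior_superior_asymmetry(power_by_angle):
    """I-S: average power of the inferior points minus that of the superior
    points on the ~2.5-3.0 mm ring, sampled at 30-degree intervals.
    power_by_angle is a hypothetical dict {angle_deg: dioptric power}."""
    inferior_angles = [210, 240, 270, 300, 330]   # hemi-meridians below horizontal
    superior_angles = [30, 60, 90, 120, 150]      # hemi-meridians above horizontal
    inferior = np.mean([power_by_angle[a] for a in inferior_angles])
    superior = np.mean([power_by_angle[a] for a in superior_angles])
    return inferior - superior

def central_corneal_power(ring2, ring3, ring4):
    """Average dioptric power of the points on rings 2, 3 and 4 (sagittal map)."""
    return float(np.mean(np.concatenate([ring2, ring3, ring4])))

# Hypothetical usage with made-up powers (diopters):
powers = {a: 44.0 + (1.5 if a in (240, 270, 300) else 0.0)
          for a in range(0, 360, 30)}
print(inferior_superior_asymmetry(powers))   # 0.9 in this toy example
print(central_corneal_power([44.1, 44.3], [44.2], [44.0, 43.9, 44.4]))
```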

Data preprocessing

In order to avoid significant differences between variable magnitudes, all features were normalized to have zero mean and unit standard deviation. To normalize the data, we treated each input variable independently and, for each variable $x_i$, we calculated its mean $\bar{x}_i$ and variance $\sigma_i^2$.16 The rescaled variables were given by:

$$\tilde{x}_i = \frac{x_i - \bar{x}_i}{\sigma_i}$$
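A minimal sketch of this rescaling step (z-score normalization), assuming the 11 attributes are stored as the columns of a NumPy array X:

```python
# Illustrative sketch only: rescale each attribute to zero mean and unit
# standard deviation, treating every input variable independently.
import numpy as np

def zscore_normalize(X):
    mean = X.mean(axis=0)    # per-attribute mean
    std = X.std(axis=0)      # per-attribute standard deviation
    return (X - mean) / std, mean, std

# The training-set mean/std would normally be reused to rescale unseen data:
# X_test_scaled = (X_test - mean) / std
```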

RBFNN

The RBFNN is a universal approximator and the main practical alternative to the MLP for non-linear modeling. It is characterized by a layer of input nodes, a layer of output nodes and one intermediate or hidden layer.16 The hidden layer performs a non-linear transformation from the input space into a high-dimensional space. The output layer applies a linear transformation from the hidden space to the output space. The idea behind a non-linear transformation followed by a linear transformation involves the fact that a complex pattern classification problem cast in a high-dimensional space is more likely to be linearly separable than in a low-dimensional space.10

Each processing unit in the hidden layer implements a radial basis function. Among the various functions tested as activation functions for RBFNN, we chose the Gaussian function, as this function is preferred in pattern classification applications.22,23

The RBFNN available in the WEKA system uses a k-means clustering algorithm to determine the centers and widths of the radial basis functions; the weights are determined by logistic regression. The adjustable parameters included the number of clusters and the ridge parameter for linear regression.23 These parameters were determined experimentally. The numbers of clusters tested were 2, 3, 4, 5, 6, 8, 10, 15, 30 and 50, and the ridge parameters tested were 1×10⁻⁸, 1×10⁻⁷, …, 1×10¹. Accuracy was used for model selection.
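The WEKA implementation is not reproduced here; as a rough sketch of the same idea (k-means centres, Gaussian hidden units and a regularized logistic output layer), one could write, for example:

```python
# Illustrative sketch only: an RBF-network-style classifier in the spirit of
# WEKA's RBFNetwork (k-means centres + Gaussian hidden units + regularized
# logistic regression on the hidden activations). Not the WEKA implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class SimpleRBFNetwork:
    def __init__(self, n_clusters=8, ridge=1e-8):
        self.n_clusters = n_clusters
        self.ridge = ridge                  # plays the role of the ridge parameter

    def _hidden(self, X):
        # Gaussian activation of each hidden unit, centred on a k-means centre
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width_ ** 2))

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_clusters, n_init=10, random_state=0).fit(X)
        self.centers_ = km.cluster_centers_
        # one shared width: average distance between centres (a common heuristic)
        dists = np.linalg.norm(self.centers_[:, None] - self.centers_[None, :], axis=2)
        self.width_ = dists[dists > 0].mean()
        # logistic regression on the hidden-layer outputs; C ~ 1/ridge
        self.out_ = LogisticRegression(C=1.0 / self.ridge, max_iter=1000)
        self.out_.fit(self._hidden(X), y)
        return self

    def predict_proba(self, X):
        return self.out_.predict_proba(self._hidden(X))
```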

SVM

The support vector machine is a learning method developed from statistical learning theory. Like the previous approach, it can be applied to both classification and regression. After the input space is mapped into a high-dimensional space, SVM uses a kernel function to find a hyperplane that maximizes the separation between two classes.10,24

The SVM was implemented using Platt's sequential minimal optimization algorithm24 with a radial basis function kernel. Two parameters were experimentally optimized: the complexity parameter (C) and the width of the Gaussian function (σ). The values of C tested were 2⁻⁵, 2⁻⁴, 2⁻³, …, 2⁴, and the values of σ tested were 1×10⁻⁸, 1×10⁻⁷, …, 1×10¹. Accuracy was used for model selection.

Because the outputs of the SVM are binary decisions, to obtain proper probability estimates we used the option that fits logistic regression models to the outputs of the support vector machine.
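A rough scikit-learn analogue of this setup (RBF kernel, a grid over the complexity parameter and the kernel width, accuracy-based model selection and Platt-style probability estimates) is sketched below. Note that scikit-learn parameterizes the kernel width as gamma rather than σ, and X and y are hypothetical placeholders.

```python
# Illustrative sketch only: RBF-kernel SVM with a grid search over C and the
# kernel width, selected by 10-fold cross-validated accuracy. probability=True
# fits a logistic (Platt-style) model to the SVM outputs. Not the WEKA/SMO
# implementation used in the study; gamma is related to, but not equal to, sigma.
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

param_grid = {
    "C": [2.0 ** k for k in range(-5, 5)],        # 2^-5 ... 2^4
    "gamma": [10.0 ** k for k in range(-8, 2)],   # 10^-8 ... 10^1
}
search = GridSearchCV(
    SVC(kernel="rbf", probability=True),
    param_grid,
    scoring="accuracy",                           # accuracy was used for model selection
    cv=KFold(n_splits=10, shuffle=True, random_state=1),
)
# With real data: search.fit(X, y); search.best_params_ gives the chosen C and width.
```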

MLP

A standard multi-layer perceptron neural network is characterized by a layer of input nodes, a layer of output nodes and one or more intermediate or hidden layers.25 In our study, we evaluated neural networks with a single hidden layer, with 11 units in the input layer and a single output neuron.

To determine the number of neurons in the hidden layer, we experimentally evaluated the performance of different neural network configurations, measuring the accuracy achieved on the validation set. The number of hidden neurons tested varied from a minimum of 1 to a maximum of 70 neurons. Weights and biases were initially generated from a spherically symmetric Gaussian distribution with a mean equal to zero16 and, as any training run is sensitive to initial connection weights, accuracy was measured and averaged for a total of 20 runs for each hidden layer configuration.

The hyperbolic tangent activation function was used for neurons in the hidden layer, and a logistic activation function was used for the output neuron. The cross-entropy error function simplifies the optimization process when the logistic activation function is used in the output layer; thus, we considered this an appropriate choice.26 The scaled conjugate gradient27 was the training algorithm, as it generally shows faster convergence when compared with gradient descent-based techniques.28

It is useless to design a classifier that accurately models the sample data used during development but does poorly on new cases; this problem is known as over-fitting of the classifier to the data. In order to avoid over-fitting during training, a validation set and weight decay regularization were used. A penalty term (E_W) proportional to the sum of squared weights was added to the cross-entropy error function (E_D). The regularized error function can be expressed as:

$$E = E_D + \alpha E_W$$

A large value of the regularization parameter α can lead to under-fitting, and a small value can lead to over-fitting. The values of α evaluated ranged from 0 to 0.4, in steps of 0.05.

In order to find the best neural network architecture, we chose the MLP that achieved the highest accuracy with the simplest architecture.
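A rough sketch of this architecture search, with scikit-learn's MLPClassifier standing in for the NETLAB network (and the lbfgs solver in place of scaled conjugate gradient), is shown below; X and y are hypothetical placeholders.

```python
# Illustrative sketch only: selecting the number of hidden units and the
# weight-decay strength by averaging cross-validated accuracy over repeated
# random initializations. scikit-learn's MLPClassifier stands in for the
# NETLAB network (tanh hidden units, L2 penalty alpha, lbfgs solver instead
# of scaled conjugate gradient); X and y are hypothetical.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier

def evaluate_architecture(X, y, n_hidden, alpha, n_runs=20):
    """Mean 10-fold accuracy over n_runs random weight initializations."""
    scores = []
    for run in range(n_runs):
        clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), activation="tanh",
                            alpha=alpha, solver="lbfgs", max_iter=2000,
                            random_state=run)
        cv = KFold(n_splits=10, shuffle=True, random_state=1)
        scores.append(cross_val_score(clf, X, y, cv=cv).mean())
    return float(np.mean(scores))

# e.g. evaluate hidden layer sizes 1..70 and alpha in {0, 0.05, ..., 0.4} and
# keep the simplest architecture with the highest averaged accuracy.
```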

RESULTS

A total of 318 subjects were enrolled in the study, 129 males (41%) and 189 females (59%). The mean age was 38.1±9.7 years. Subjects were classified into four categories: normal (n = 172), astigmatism (n = 89), keratoconus (n = 46) and photorefractive keratectomy (n = 11).

The parameters that reached the best performance for the RBFNN were 8 clusters and a ridge of 1×10⁻⁸. For the SVM classifier, a C value of 0.5 and a σ value of 1×10⁻⁶ were used. The MLP reached its best performance with a regularization parameter (α) of 0.15 and 17 hidden units.

ROC curves for classifying eyes as keratoconus or non-keratoconus were determined for each machine learning technique and each individual attribute.

Sensitivity, specificity and AROC values given by each individual Orbscan II attribute and by the machine learning classifiers are shown in Table 2. To ease the comparison of the results, we have displayed the sensitivity at defined specificities. Specificities of 75% and 90% were chosen arbitrarily to represent moderate and high specificity, respectively (Table 2).

Table 2.

Areas under ROC curves and sensitivities at fixed specificities for detecting keratoconus apart from all other patterns for all techniques and attributes.

Technique  AROC  SE  Sensitivity at 75% specificity (%)  Sensitivity at 90% specificity (%) 
SVM  0.99  0.002  100  100 
MLP  0.99  0.002  100  100 
RBFNN  0.98  0.005  98  98 
I-S  0.96  0.007  100  95 
5 mm irregularity  0.95  0.02  93  87 
Maximum anterior elevation  0.95  0.02  89  87 
Maximum posterior elevation  0.94  0.02  91  89 
Thinnest pachymetry point  0.87  0.03  54  47 
Central corneal power  0.86  0.03  73  69 
Maximum simulated keratometry  0.86  0.04  78  69 
Posterior best-fit sphere  0.79  0.04  69  54 
Anterior best-fit sphere  0.78  0.04  65  40 
Minimum simulated keratometry  0.77  0.04  69  50 
Simulated astigmatism  0.71  0.04  43  22 

AROC, area under ROC curve; SE, standard error; SVM, support vector machine; RBFNN, radial basis function neural network; MLP, multi-layer perceptron; I-S, inferior superior asymmetry.

The individual attributes with the highest ROC areas were I-S (0.96), followed by 5 mm irregularity (0.95), maximum anterior elevation (0.95) and maximum posterior elevation (0.94). The areas under the ROC curves of these attributes showed no statistical difference, but were significantly larger than the areas of the other individual attributes (p<0.05).30

There were no differences between the performances of SVM, MLP and RBFNN. The ROC curves of the three classifiers are shown in Figure 1. The AROC of the SVM (0.99), MLP (0.99) and RBFNN (0.98) classifiers were significantly larger than those for all the individual attributes evaluated (p<0.05).29

Figure 1.

ROC curves for detecting keratoconus apart from the other non-keratoconus patterns, computed for the support vector machine (SVM), multi-layer perceptron (MLP) and radial basis function neural network (RBF) classifiers. AROC, area under the ROC curve; Se, sensitivity; Es, specificity; cut off, cut-off value.
DISCUSSION

Early forms of keratoconus can be present without any slit-lamp sign of the disease.30 In these cases, the evaluation of the anterior topography of the cornea is essential.31 Corneal topography maps provide useful information about the corneal surface; however, their interpretation may represent a difficult task, especially because of the many forms in which keratoconus may present.32 Thus, the ability to automatically screen for KC corneal topographic patterns would be a useful aid in screening candidates for refractive procedures.5

In order to help clinicians, numerical methods and quantitative parameters, calculated from corneal maps3,31,32 or Orbscan II examinations,6,33,34 have been proposed.

Machine learning methods, such as artificial neural networks and discriminant analysis, have already been used to identify the topographic patterns of KC.5,8,35,36 Unlike the majority of previous publications, in this study we used Orbscan II examinations instead of anterior topography data alone. In addition to anterior topography, the Orbscan II examination provides important information, such as pachymetry and elevation maps. As the analysis of Orbscan II data has already been demonstrated to be useful in KC detection,6,35–39 we hypothesized that the processing of Orbscan II data could provide high accuracy in the classification of keratoconus examinations.

Maeda et al,3 Smolek and Klyce8 and Accardo and Pensiero5 have already demonstrated the value of a neural network approach in identifying keratoconus patterns from corneal topography, and our work agrees with their results. However, besides the use of Orbscan II data, we also used machine learning models that, although already described in other fields of ophthalmology,40–42 have not previously been used for keratoconus detection. Thus, despite similar results, it is not possible to compare our results directly with previous studies because of methodological differences and different populations.5,8,35,36

In the absence of a definitive or genetic test to detect patients with KC, computer-assisted corneal analysis represents the most effective method.

Although the 5 mm irregularity, I-S, maximum anterior elevation and maximum posterior elevation showed good performance, the results in this study indicate that SVM, MLP and RBFNN classifiers, trained on combined Orbscan II measurements, are superior to all the single parameters evaluated to detect keratoconus. This is in accordance with previous publications that recommended the use of anterior and posterior corneal data, or the association of Orbscan II measurements to improve keratoconus detection ability.39

In our study, simulated astigmatism showed the worst performance of the individual attributes evaluated. Smolek and Klyce8 also reported this observation.

SVM, MLP and RBFNN were effective in detecting keratoconus. There were no differences between the classifiers' performance. It is important to highlight, however, that the performance of the classifiers is always influenced by the datasets used to develop and test the model. Thus, our results may be somewhat overestimated, as we used very similar train-and-test sets, reflecting the characteristics of our clinic population.

Although we trained and tested the classifiers on different data, each data input was generated from the same rather homogeneous pool.

Although similar previous studies have concentrated on MLP, some studies have encouraged the use of SVM and RBFNN classifiers.

RBFNN has some advantages over MLP. In general, RBFNN is more resilient to a bad training set than MLP. In addition, the simple linear transformation in the output layer can be optimized using traditional linear modeling techniques, which are fast and do not suffer from problems such as local minima, which plague MLP training techniques. In addition, using only one single hidden layer removes some design decisions about numbers of layers.43

MLP error surfaces are complex and are characterized by a number of unhelpful features, such as local minima, which correspond to a partial solution for the network in response to the training data. Like RBFNN, a significant advantage of SVM is that, although MLP can suffer from multiple local minima, the solution to a SVM is global and unique. Besides that, fewer samples are required to prevent over-fitting.10

On the other hand, a disadvantage of RBFNN and SVM, in contrast to MLP, is that they give every attribute the same weight. Hence, they cannot deal effectively with irrelevant attributes.23

It is not known beforehand which parameters are best for a given problem; consequently, some kind of model selection (parameter search) must be done. In this study, we used a grid search. Although time-consuming, the time required to find good parameters with this strategy is not much more than that required by approximation or heuristic methods, as there were only two parameters in each classifier. Another advantage of this method is that the grid search can easily be parallelized, since each pair of parameters tested is independent.44 However, as it is impossible to try all possible combinations, any model can provide only a suboptimal result.
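As a sketch of the parallelization point, the independent (C, σ) pairs of a grid search can be evaluated in parallel; the example below uses joblib as one possible tool, and evaluate_pair is a hypothetical stand-in for training and cross-validating one parameter combination.

```python
# Illustrative sketch only: each (C, sigma) pair is independent, so the grid
# can be evaluated in parallel. evaluate_pair is a hypothetical placeholder
# that would train and cross-validate a classifier for one parameter pair.
from itertools import product
from joblib import Parallel, delayed

C_grid = [2.0 ** k for k in range(-5, 5)]
sigma_grid = [10.0 ** k for k in range(-8, 2)]

def evaluate_pair(C, sigma):
    return 0.0   # placeholder for a cross-validated accuracy

scores = Parallel(n_jobs=-1)(
    delayed(evaluate_pair)(C, s) for C, s in product(C_grid, sigma_grid)
)
best_score, best_params = max(zip(scores, product(C_grid, sigma_grid)))
```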

The evaluation criteria used to report results directly influence the apparent performance of a classifier. As our study focuses on whether it is possible to distinguish one class of data from the others based on the same set of measurements, we used only discrimination ability to assess model performance.45

Orbscan II can provide plenty of data,11 and it is known that, as the number of variables used to train a learning method increases, so does the amount of information available. However, as features are added, more samples are needed to prevent over-fitting. In order to avoid this situation, known as “the curse of dimensionality”,46 we limited the data used. However, despite the satisfactory performance, we believe that the use of a different combination of attributes, selected with a data-mining strategy from a bigger database, could be associated with performance improvement.

The ability to screen automatically for keratoconus patterns would be a helpful tool in clinical practice, especially if the classifier is able to detect early cases, since clinically evident KC is easy to identify from clinical signs.

In accordance with previous studies, we did not include maps that could only be classified as suspected keratoconus.5,39 This strategy was adopted to allow more precise criteria in the assessment of the results, as the main purpose of this study was to evaluate keratoconus detection.

However, we believe that further investigation, with the inclusion of suspected keratoconus or other confusing patterns, would be desirable, since there is a wide range of corneal patterns in clinical practice. Although the inclusion of these patterns may increase the false-positive rate, for keratoconus screening a method with high sensitivity would be more appropriate than one with high specificity, as the risk of misclassifying a keratoconus subject is greater than that of misclassifying a normal subject.

In general, physicians will not accept and act on the advice of a computer system without knowing the basis for the system's decision,47 and one of the greatest disadvantages of the methods tested in this study is their inability to produce meaningful explanations for their decisions.48 However, some factors in the keratoconus detection task represent favorable indicators for applying them: 1) an outcome influenced by multiple factors; 2) the need for results that apply to an individual rather than to a population; and 3) the desirability of constructing composite indices from multiple measurements.49

A good KC screening tool should identify the largest number of cases, with the minimum possible number of false-positives. Overall, our results suggest that SVM, MLP and RBFNN classifiers, trained on Orbscan II data, could represent useful techniques for keratoconus detection. We believe that future work, with larger databases and the use of different combinations of attributes, would probably be associated with better results.

REFERENCES
[1]
JH Krachmer , RS Feder , MW Belin .
Keratoconus and related noninflammatory corneal thinning disorders.
Surv Ophthalmol, 28 (1984), pp. 322
[2]
YS Rabinowitz .
Keratoconus.
Surv Ophthalmol, 42 (1998), pp. 319
[3]
N Maeda , SD Klyce , MK Smolek , HW Thompson .
Automated keratoconus screening with corneal topography analysis.
Invest Ophthalmol Vis Sci, 35 (1994), pp. 57
[4]
MW Belin , SS Khachikian .
Keratoconus: it is hard to define, but.
Am J Ophthalmol, 143 (2007), pp. 3
[5]
PA Accardo , S Pensiero .
Neural network-based system for early keratoconus detection from corneal topography.
J Biomed Inform, 35 (2002), pp. 9
[6]
HB Fam , KL Lim .
Corneal elevation indices in normal and keratoconic eyes.
J Cataract Refract Surg, 32 (2006), pp. 7
[7]
C Stephan, HA Meyer, H Cammann, M Lein, SA Loening, K Jung.
Re: Felix K.-H. Chun, Markus Graefen, Alberto Briganti, Andrea Gallina, Julia Hopp, Michael W. Kattan, Hartwig Huland, Pierre I. Karakiewicz. Initial biopsy outcome prediction – head-to-head comparison of a logistic regression-based nomogram versus artificial neural network.
[8]
MK Smolek , SD Klyce .
Current keratoconus detection methods compared with a neural network approach.
Invest Ophthalmol Vis Sci, 38 (1997), pp. 9
[9]
SD Klyce , MD Karon , MK Smolek .
Screening patients with the corneal navigator.
J Refract Surg, 21 (2005), pp. 22
[10]
SS Haykin .
Neural networks: a comprehensive foundation, 2nd ed, Prentice Hall, (1999),
[11]
G Cairns , CN McGhee .
Orbscan computerized topography: attributes, applications, and limitations.
J Cataract Refract Surg, 31 (2005), pp. 20
[12]
JT Holladay .
Standardizing constants for ultrasonic biometry, keratometry, and intraocular lens power calculations.
J Cataract Refract Surg, 23 (1997), pp. 70
[13]
N Maeda , SD Klyce , MK Smolek .
Comparison of methods for detecting keratoconus using videokeratography.
Arch Ophthalmol, 113 (1995), pp. 4
[14]
YS Rabinowitz , PJ McDonnell .
Computer-assisted corneal topography in keratoconus.
Refract Corneal Surg, 5 (1989), pp. 8
[15]
M Hall , E Frank , G Holmes , B Pfahringer , P Reutemann , IH Witten .
The WEKA Data Mining Software: An Update.
SIGKDD Explorations, 11 (2009),
[16]
IT Nabney.
Netlab: Algorithms for Pattern Recognition, 4 ed, Springer, (2004),
[17]
SM Weiss , CA Kulikowski .
Computer systems that learn : classification and prediction methods from statistics, neural nets, machine learning, and expert systems, M. Kaufmann Publishers, (1991),
[18]
JA Hanley , BJ McNeil .
The meaning and use of the area under a receiver operating characteristic (ROC) curve.
Radiology, 143 (1982), pp. 36
[19]
U Scheipers , C Perrey , S Siebers , C Hansen , H Ermert .
A tutorial on the use of ROC analysis for computer-aided diagnostic systems.
Ultrason Imaging, 27 (2005), pp. 98
[20]
RC Prati , GEAPA Batista , MC Monard .
Evaluating classifiers using ROC curves.
IEEE América Latina, 6 (2008), pp. 22
[21]
DG Altman , JM Bland .
Diagnostic tests 3: receiver operating characteristic plots.
[22]
AG Bors , I Pitas .
Median radial basis function neural network.
IEEE Trans Neural Netw, 7 (1996), pp. 64
[23]
IH Witten , E Frank .
Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed, Morgan Kaufman, (2005),
[24]
J Platt .
Fast training of support vector machines using sequential minimal optimization.
Advances in Kernel Methods: Support Vector Learning,
[25]
RD Reed , RJ Marks .
Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, The MIT Press, (1999),
[26]
M Garcia , CI Sanchez , MI Lopez , D Abasolo , R Hornero .
Neural network based detection of hard exudates in retinal images.
Comput Methods Programs Biomed, 93 (2009), pp. 19
[27]
M Moller .
A scaled conjugate gradient algorithm for fast supervised learning.
Neural Networks, 6 (1993), pp. 33
[28]
CM Bishop .
Neural Networks for Pattern Recognition, Clarendon Press; Oxford University Press, (1995),
[29]
JA Hanley , BJ McNeil .
A method of comparing the areas under receiver operating characteristic curves derived from the same cases.
Radiology, 148 (1983), pp. 43
[30]
K Zadnik , JT Barr , MO Gordon , TB Edrington .
Biomicroscopic signs and disease severity in keratoconus.
Collaborative Longitudinal Evaluation of Keratoconus (CLEK) Study Group. Cornea, 15 (1996), pp. 46
[31]
YS Rabinowitz , K Rasheed .
KISA% index: a quantitative videokeratography algorithm embodying minimal topographic criteria for diagnosing keratoconus.
J Cataract Refract Surg, 25 (1999), pp. 35
[32]
YS Rabinowitz .
Videokeratographic indices to aid in screening for keratoconus.
J Refract Surg, 11 (1995), pp. 9
[33]
SC Pflugfelder , Z Liu , W Feuer , A Verm .
Corneal thickness indices discriminate between keratoconus and contact lens-induced corneal thinning.
[34]
T Tanabe , T Oshika , A Tomidokoro , S Amano , S Tanaka , T Kuroda , et al.
Standardized color-coded scales for anterior and posterior elevation maps of scanning slit corneal topography.
[35]
N Maeda , SD Klyce , MK Smolek .
Neural network classification of corneal topography. Preliminary demonstration.
Invest Ophthalmol Vis Sci, 36 (1995), pp. 35
[36]
LA Carvalho .
Preliminary results of neural networks and Zernike polynomials for classification of videokeratography maps.
[37]
L Lim , RH Wei , WK Chan , DT Tan .
Evaluation of keratoconus in Asians: role of Orbscan II and Tomey TMS-2 corneal topography.
Am J Ophthalmol, 143 (2007), pp. 400
[38]
SN Rao , T Raviv , PA Majmudar , RJ Epstein .
Role of Orbscan II in screening keratoconus suspects before refractive corneal surgery.
[39]
B Sonmez , MP Doan , DR Hamilton .
Identification of scanning slit-beam topographic parameters important in distinguishing normal from keratoconic corneal morphologic features.
Am J Ophthalmol, 143 (2007), pp. 8
[40]
C Bowd , FA Medeiros , Z Zhang , LM Zangwill , J Hao , TW Lee , et al.
Relevance vector machine and support vector machine classifier analysis of scanning laser polarimetry retinal nerve fiber layer measurements.
Invest Ophthalmol Vis Sci, 46 (2005), pp. 9
[41]
C Bowd , MH Goldbaum .
Machine learning classifiers in glaucoma.
Optom Vis Sci, 85 (2008), pp. 405
[42]
MH Goldbaum , PA Sample , K Chan , J Williams , TW Lee , E Blumenthal , et al.
Comparing machine learning classifiers for diagnosing glaucoma from standard automated perimetry.
Invest Ophthalmol Vis Sci, 43 (2002), pp. 9
[43]
StatSoft, Inc.
Electronic Statistics Textbook.
[44]
C Hsu , C Chang , C Lin .
A Practical Guide to Support Vector Classification, Departament of Computer Science, National Taiwan University, (2010),
[45]
S Dreiseitl , L Ohno-Machado .
Logistic regression and artificial neural network classification models: a methodology review.
J Biomed Inform, 35 (2002), pp. 9
[46]
R Bellman .
Adaptive Control Processes: a Guided Tour, 1 ed, Princeton University Press, (1961),
[47]
RL Teach , EH Shortliffe .
An analysis of physician attitudes regarding computer-based clinical consultation systems.
Comput Biomed Res, 14 (1981), pp. 58
[48]
CE Kahn Jr .
Artificial intelligence in radiology: decision support systems.
Radiographics, 14 (1994), pp. 61
Copyright © 2010. CLINICS