Sample Left and Right Fundus Scans
During my time at the University of Waterloo, I wrote a paper studying the applications of Convolutional Neural Networks (CNNs) and Multilayer Perceptrons (MLPs) for classifying ocular diseases from retinal fundus scans of patients. A link to the full paper can be found here: Research Paper.
Skills: Python, Convolutional Neural Networks, Image Processing, Data Cleansing, Model Training, Model Tuning, ADAM
Several models were trained on a dataset of 10,000 retinal fundus images, varying the following parameters:
- Number of Images
- Image Resolution
- Number of Epochs
- Learning Rate
- Number of Hidden Layers
- Neuron Density (the number of neurons in a layer)
- Dropout
- Batch Size
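The parameter sweep above can be sketched as a simple grid search. The specific values below are illustrative assumptions, not the exact ones tested in the paper:

```python
from itertools import product

# Hypothetical hyperparameter grid mirroring the parameters varied in the study;
# these particular values are placeholders, not the paper's settings.
param_grid = {
    "image_resolution": [128, 256],
    "learning_rate": [1e-3, 1e-4],
    "hidden_layers": [2, 3],
    "dropout": [0.2, 0.5],
    "batch_size": [32, 64],
}

# Enumerate every combination as one training configuration.
configs = [dict(zip(param_grid, values)) for values in product(*param_grid.values())]
print(len(configs))  # 2^5 = 32 configurations
```

Each configuration dict would then be passed to a training run, with test accuracy recorded per run to compare settings.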
Two machine learning approaches were evaluated: Multilayer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs). MLPs treat images as flattened vectors and learn global patterns, while CNNs leverage spatial features and hierarchical patterns within the images using convolutional and pooling layers. Various preprocessing methods and hyperparameters such as image resolution, learning rate, layer depth, dropout, and regularization were tested to optimize model performance.
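The flattening-versus-convolution distinction can be made concrete with a toy NumPy sketch (an illustrative stand-in, not the models from the paper): the MLP sees a flat vector, while the CNN slides a kernel over the image and pools the result, preserving local spatial structure.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # toy grayscale stand-in for a fundus scan

# MLP view: the image becomes a flat vector, discarding spatial layout.
mlp_input = image.flatten()  # shape (64,)

# CNN view: a 3x3 kernel slides over the image (valid convolution),
# so each output value summarizes one local neighbourhood.
kernel = rng.random((3, 3))
h, w = image.shape
feature_map = np.array([
    [np.sum(image[i:i + 3, j:j + 3] * kernel) for j in range(w - 2)]
    for i in range(h - 2)
])  # shape (6, 6)

# 2x2 max pooling downsamples the feature map, keeping the strongest activations.
pooled = feature_map.reshape(3, 2, 3, 2).max(axis=(1, 3))  # shape (3, 3)

print(mlp_input.shape, feature_map.shape, pooled.shape)
```

Stacking several such convolution-and-pooling stages is what lets a CNN build the hierarchical features the paragraph describes.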
The CNN model outperformed the MLP, achieving a testing accuracy of approximately 63%, compared to 45% for the MLP. CNNs effectively captured the spatial and hierarchical features critical for differentiating ocular diseases, while MLPs struggled due to their lack of spatial awareness. Downscaling images to 256x256 pixels and increasing the dataset size improved both training efficiency and accuracy. The study demonstrates that CNNs are a promising approach for automated ocular disease diagnosis from fundus images.
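The downscaling step can be illustrated with a block-averaging sketch in NumPy. This is an assumed implementation for illustration (the paper's actual image pipeline may differ), using an array as a stand-in for a high-resolution scan:

```python
import numpy as np

def downscale(image: np.ndarray, size: int = 256) -> np.ndarray:
    """Reduce a square image to size x size by averaging non-overlapping blocks.

    Assumes the input side length is an exact multiple of `size`.
    """
    factor = image.shape[0] // size
    return image.reshape(size, factor, size, factor).mean(axis=(1, 3))

# Toy 1024x1024 array standing in for a high-resolution fundus image.
hi_res = np.arange(1024 * 1024, dtype=float).reshape(1024, 1024)
lo_res = downscale(hi_res)
print(lo_res.shape)  # (256, 256)
```

Reducing resolution this way shrinks the input dimensionality by a factor of 16 here, which is what makes the reported gains in training efficiency plausible.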