Transfer learning is suitable for medical image classification
In transfer learning, the neural network is trained in two stages: 1) pretraining, where the network is generally trained on a large-scale benchmark dataset representing a wide diversity of labels/categories (e.g., ImageNet); and 2) fine-tuning, where the pretrained network is further trained on the target task, which may have far fewer labelled examples than the pretraining dataset. The pretraining step helps the network learn general features that can be reused on the target task. In this setting, standard architectures designed for ImageNet, together with their pretrained weights, are fine-tuned on medical tasks ranging from interpreting chest x-rays and identifying eye diseases to early detection of Alzheimer's disease.