Abstract

Lymphoma and leukemia are fatal blood cancers that affect all age groups and both sexes, and their rising incidence is accompanied by an increasing death ratio. Both lymphoma and leukemia are associated with the damage and abnormal rise of immature lymphocytes, monocytes, neutrophils, and eosinophil cells. Early prediction and treatment of blood cancer is therefore a major issue for survival rates in the health sector. At present, blood cancer is commonly analyzed and predicted manually from microscopic images of white blood cells, a procedure that is slow and contributes to a considerable proportion of deaths. Manual prediction and analysis of eosinophils, lymphocytes, monocytes, and neutrophils are difficult and time-consuming. Previous studies have applied numerous deep learning and machine learning techniques to predict blood cancer, but these studies still have limitations. In this article, we therefore propose a deep learning model empowered with transfer learning and image processing techniques to improve prediction results. The proposed model incorporates different levels of prediction, analysis, and learning procedures and employs different learning criteria such as the learning rate and the number of epochs. It trains several transfer learning models with varying parameters, uses private cloud techniques to select the best prediction model, and applies an extensive set of performance measures to predict the white blood cells that cause cancer. After extensive experiments with AlexNet, MobileNet, and ResNet, both with and without image processing and under numerous learning criteria, AlexNet trained with stochastic gradient descent with momentum and image processing outperformed the other configurations, achieving the highest prediction accuracy of 97.3% with a misclassification rate of 2.7%. The proposed model gives good results and can be applied to the smart diagnosis of blood cancer using eosinophils, lymphocytes, monocytes, and neutrophils.

1. Introduction

Leukemia and lymphoma are the most frequent kinds of blood cancer in people of all ages, particularly young people. This abnormal condition is induced by the proliferation and immature growth of white blood cells, which can harm red blood cells, the bone marrow, and the immune system [1]. Leukemia accounts for more than 3.5% of new cancer cases in the United States, with over 50,000 new cases diagnosed in 2018 [2]. Cancerous lymphoblasts in the blood travel to other organs, including the heart, brain, lungs, and arteries, before spreading to important tissues throughout the body. Red blood cells are normally in charge of transporting oxygen from the lungs to all organs and make up roughly half of the total blood volume. White blood cells, on the other hand, serve an important role in the human immune system, acting as the first line of defense against a variety of diseases and disorders [3]. As a result, accurately identifying these white blood cells is crucial to understanding the symptoms of the disease, and their categorization is determined by their cytoplasmic composition. Changes in lymphocytes, a kind of white blood cell, cause acute lymphoblastic leukemia [4]. Leukemia is either acute or chronic. Without treatment, the typical survival period for acute myeloid leukemia is roughly three months, whereas chronic leukemia develops over a longer period than acute leukemia. Acute lymphoblastic leukemia is the most common kind of acute leukemia, accounting for around 25% of all juvenile malignancies [5, 6]. Early detection of leukemia and lymphoma has always been difficult for researchers, clinicians, and hematologists. Leukemia symptoms include enlarged lymph nodes, paleness, fever, and weight loss, although these symptoms can also be caused by other diseases [7]. Because of the moderate nature of the symptoms, diagnosing leukemia and lymphoma in their early stages is challenging. Microscopic assessment of a peripheral blood smear (PBS) is the most often used leukemia and lymphoma diagnostic approach, while the gold standard for leukemia and lymphoma diagnosis entails obtaining and analyzing white blood cell samples [8]. Over the last two decades, several studies have applied machine learning, deep learning, and machine-diagnostic approaches to laboratory image processing in the hope of pushing back the boundaries of late diagnosis of leukemia and lymphoma and establishing their subtypes [9]. In this line of research, blood smear pictures have been evaluated to diagnose, distinguish, and count cells in distinct kinds of leukemia and lymphoma [10].

Deep learning is a well-known area of artificial intelligence that comprises algorithms and statistical associations, and it has quickly permeated the field of clinical research. Deep learning makes it possible to train machines without prior expert knowledge and lets them discover knowledge from the data themselves. The application of these technologies to medical data processing has achieved outstanding results and has been especially beneficial in disease detection [11]. According to several studies, deep learning techniques significantly support [12] complex medical decision-making processes in medical image processing [13] by extracting and then analyzing characteristics from these images [14]. As the number of medical diagnostic instruments increased and a considerable volume of high-quality data was created, there was an urgent demand for more powerful data processing technologies, since conventional data analysis methods were incapable of analyzing such massive volumes of data or identifying data trends.

The proposed study employs deep learning procedures driven by transfer learning and image processing to overcome the limitations of earlier investigations. The study's significant contributions are as follows:
(i) The proposed study uses transfer learning incorporated with various algorithms for better prediction results.
(ii) The proposed study follows a generic approach, and comparative analysis shows that deep learning empowered with transfer learning and image processing outperforms prior work on the white blood cell dataset.
(iii) For enhanced results, the proposed study uses image processing practices.
(iv) Private data cloud techniques are used for data and model security.
(v) For performance evaluation, the proposed model uses numerous performance metrics.

2. Literature Review

Image analysis of infected blood cells is frequently separated into four stages: preprocessing, segmentation, feature extraction, and classification. Extensive research has been conducted on numerous types of cancer, including leukemia and lymphoma. Zhang et al. proposed a convolutional neural network (CNN) model for the nonsegmented direct sorting of tissue samples into healthy and diseased cells [15]. To categorize distinct kinds of white blood cells in the body, Zhao et al. offered machine learning approaches such as CNN, support vector machine (SVM), random forest, and others [16].

The relative white blood cell ratio is used to determine the morphology of leukemia and lymphoma. Deep learning-based computational analysis has shown promise as a diagnostic technique for the differential white blood cell count. Choi et al. [17] and Qin et al. [18] demonstrated that deep learning can categorize white blood cells at numerous stages of maturation, laying the foundation for a deep learning-based diagnosis of leukemia and lymphoma; however, these studies were constrained by a shortage of cell types and by poor sensitivity, respectively, and classification was usually performed on precompiled images rather than raw clinical images. The white blood cell differential ratio for myeloid analysis is an important use of transfer learning that still requires improvement.

Karthikeyan and Poornima [19] described a novel method for segmenting and classifying acute myeloid leukemias and lymphomas, in which the images were preprocessed using histogram equalization and median filtering. Two techniques for lymphocyte segmentation were compared: fuzzy c-means clustering and k-means clustering; fuzzy c-means clustering beat k-means clustering for lymphocyte segmentation. After that, a support vector machine was utilized to separate normal and blast cells. MoradiAmin et al. [20] separated lymphocytes from the background using fuzzy c-means clustering to improve the identification of acute lymphocytic leukemia cells. After extracting diverse shape-based features, they used hierarchical clustering to reduce the number of parameters before passing them to an SVM classifier for normal and blast cell categorization.

Salihah et al. [21] employed computer vision techniques to overcome the difficulties of manual counting. In that work, the image was preprocessed to remove the chance of distortion, and the proportion of white blood cells to red blood cells was determined to decide whether the image is normal or abnormal. Horie et al. [22] demonstrated the diagnostic effectiveness of deep learning approaches such as CNNs for esophageal cancer, including squamous cell carcinoma and adenocarcinoma, with a sensitivity of 98%.

The authors of [23] developed a CNN model for leukemia prediction that featured three key steps: contrast stretching, edge extraction, and deep feature extraction via transfer learning. In [24], the authors developed a convolutional neural network-based method for distinguishing infected images from healthy ones; furthermore, clustering with the EM approach was utilized to compute the rate at which the infection has spread. The study [25] proposed computer-aided diagnostic methods for leukemia cancer classification using an ensembled SVM learning approach. The authors of [26] proposed a supervised machine learning approach to identify blood cancers and then categorize them using a fully integrated network.

The study [27] offered classification models for distinguishing microscopic blood images of leukemia and lymphoma patients from those of healthy subjects. In the first model, a pretrained CNN named AlexNet is used to extract the features, and numerous additional classifiers are applied; in tests, the support vector machine fared better than the other classifiers. In the second model, AlexNet is used for both feature extraction and classification, and the results demonstrate that it outperforms the other models on various performance criteria.

The authors of [28] presented a computational method for detecting acute leukemia and lymphoma. First, preprocessing was applied to digital microscope images to decrease noise and blurring. Color, shape, texture, and statistical features were then extracted, and cells were classified as benign or malignant using classification models based on K-nearest neighbors and naive Bayes. Experiments with sixty blood samples demonstrated the effectiveness of the K-nearest neighbor (KNN) classifier, which had a 92.8% accuracy rate.

The authors of [29] created a mechanism for categorizing acute myelogenous leukemia cells into subgroups. Initially, cells were segmented using a color k-means technique. Six statistical characteristics were then extracted and fed into a multiclass SVM classifier. The approach yielded an overall accuracy of 87% and a maximum accuracy of 92.9%.

According to the study [30], a three-tier approach consisting of feature extraction, encoding, and categorization is recommended. The goal of this approach was to evaluate whether or not a patient has leukemia or lymphoma based on an image of a blood sample from that patient. A dense, structurally complex transform was used for feature extraction, the size of the recovered feature vectors was reduced at the encoding layer, and a multiclass support vector machine classifier was finally used for classification. Experiments with four hundred samples resulted in an accuracy of 79.38%.

The study [31] established a classification system for acute myelogenous leukemia that segments the nuclei using contour-signature and k-means methods. Morphological characteristics such as cell size, cell color, and so on were then extracted. Studies on a dataset of one hundred images found that the SVM classifier reached up to 92% classification accuracy.

The study [32] described an automated microscopic image-based technique for diagnosing leukemia and lymphoma. The technique begins by reducing noise and blur during preprocessing. The white blood cells were then separated using the k-means and Zack algorithms, after which chromatic, statistical, geometric, and textural features were extracted. Finally, an SVM classifier was utilized to distinguish between healthy and unhealthy images. Trials on a dataset of twenty-seven images revealed a 93.57% classification accuracy.

The research [33] created a categorization system for forms of acute leukemia, including acute myeloid leukemia and acute lymphocytic leukemia. Twelve characteristics were extracted by hand from the image samples, and a K-nearest neighbor predictor was then applied for classification. Experiments using a sample of 350 photos yielded 86% accuracy. The authors of [34] developed a five-feature method for acute myelogenous leukemia categorization that improves image contrast, and an SVM classifier was used to categorize the data. Experiments with 51 photos provided a categorization accuracy of 93.5%.

To categorize acute lymphocytic leukemia and its subtypes, the authors of [35] recommended using a CNN-based classifier. A dataset of 373 images was utilized for the evaluation, and an accuracy of 80% was reached; experiments confirmed this method's superiority over a range of earlier techniques. The authors of [36] used several deep learning approaches, including CNN and SVM, to predict blood cancer cells and achieved 97.04% prediction accuracy.

Previous research on the prediction of blood cancer from white blood cells using machine learning and deep learning has significant drawbacks. The limitations of prior investigations are summarized in Table 1.

As Table 1 summarizes the previous research that predicted blood cancer using machine learning, deep learning, and transfer learning techniques, every study has its own limitations. This study therefore has the advantage of addressing all these limitations during blood cancer prediction.

3. Materials and Methods

Figure 1 depicts an overview of the suggested approach for predicting blood cancer from malignant white blood cells using transfer learning. The proposed process begins with data input from several hospital sources. After collecting all white blood cell (WBC) samples, data augmentation techniques were used to overcome the scarcity of data samples for the WBC classes. The proposed methodology comprises four major phases in the whole blood cancer prediction process. In the first phase, the proposed framework collects data from the hospital, preprocesses the blood samples, divides all preprocessed samples into training and testing ratios, and stores these split samples in a private cloud for easy access at any step. The training phase imports the training data samples from the private cloud and trains the AlexNet, ResNet, and MobileNet algorithms with stochastic gradient descent (SGD) with momentum, adaptive moment estimation, and root mean square propagation. After training the AlexNet, ResNet, and MobileNet algorithms, the learning criteria are checked; if the learning criteria match the proposed framework's expectations, the trained model is stored separately on each algorithm's private cloud. If the model does not meet the learning criteria, image processing techniques such as histogram equalization are applied, the model is trained again, and the learning criteria are rechecked.
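The paper does not state its implementation framework, so the following is only a minimal PyTorch sketch of the kind of transfer learning setup described above: an ImageNet-pretrained backbone (AlexNet, ResNet, or MobileNet) whose final layer is replaced for the four WBC classes and trained with one of the three optimizers (SGD with momentum, Adam, or RMSprop). All function names, the momentum value, and every hyperparameter other than those stated in the text (learning rate 0.001, 100 epochs) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # eosinophil, lymphocyte, monocyte, neutrophil

def build_model(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    if name == "alexnet":
        model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        model.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    elif name == "resnet18":
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    elif name == "mobilenet_v2":
        model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return model

def build_optimizer(kind: str, model: nn.Module, lr: float = 0.001):
    """The three optimizers compared in the paper: SGDM, ADAM, and RMSPROP."""
    if kind == "sgdm":
        return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    if kind == "adam":
        return torch.optim.Adam(model.parameters(), lr=lr)
    if kind == "rmsprop":
        return torch.optim.RMSprop(model.parameters(), lr=lr)
    raise ValueError(f"unknown optimizer: {kind}")

def train(model, loader, optimizer, epochs: int = 100, device: str = "cpu"):
    """Plain cross-entropy fine-tuning loop (100 epochs, as in the paper)."""
    criterion = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```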

In the third phase, the best-trained models are selected from all the private clouds and stored in another private cloud for the subsequent testing process. In the last phase, known as the testing phase, the blood samples are imported from the cloud, the best-performing trained model is imported from the model cloud, and the testing process is applied to predict the cancerous white blood cells. Finally, the proposed framework uses numerous statistical metrics [37–43], e.g., classification accuracy (CA), negative predicted value (NPV), sensitivity, specificity, F1-score, misclassification rate (MCR), positive predicted value (PPV), likelihood positive ratio (LPR), false negative rate (FNR), likelihood negative ratio (LNR), false positive rate (FPR), and Fowlkes–Mallows index (FMI).

All of these metrics are derived from the true positive (TP), true negative (TN), false positive (FP), and false negative (FN) counts obtained by comparing the true class with the predicted class for each cell type.
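The paper's equations are not reproduced here; as a reference, the sketch below shows how the listed metrics are conventionally computed from per-class (one-vs-rest) TP, TN, FP, and FN counts. The function name is illustrative, and the formulas are the standard textbook definitions rather than expressions quoted from the paper.

```python
import math

def confusion_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard one-vs-rest confusion-matrix metrics for a single class."""
    total = tp + tn + fp + fn
    sensitivity = tp / (tp + fn)                # true positive rate (recall)
    specificity = tn / (tn + fp)                # true negative rate
    ppv = tp / (tp + fp)                        # positive predicted value (precision)
    npv = tn / (tn + fn)                        # negative predicted value
    fpr = fp / (fp + tn)                        # false positive rate = 1 - specificity
    fnr = fn / (fn + tp)                        # false negative rate = 1 - sensitivity
    return {
        "CA": (tp + tn) / total,                # classification accuracy
        "MCR": (fp + fn) / total,               # misclassification rate = 1 - CA
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
        "F1": 2 * ppv * sensitivity / (ppv + sensitivity),
        "FPR": fpr,
        "FNR": fnr,
        "LPR": sensitivity / fpr,               # positive likelihood ratio
        "LNR": fnr / specificity,               # negative likelihood ratio
        "FMI": math.sqrt(ppv * sensitivity),    # Fowlkes-Mallows index
    }
```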

The algorithm of the proposed framework for blood cancer prediction using transfer learning-enhanced white blood cell classification is shown in Table 2; it details the complete procedure of the proposed framework.

3.1. Dataset

In this study, the dataset was acquired from the online source of Ghaderzadeh et al. [43]. The dataset consists of four classes: eosinophils, lymphocytes, monocytes, and neutrophils. Machine learning algorithms are highly data-hungry [44].

The dataset used by the proposed framework consists of 10,000 instances, with almost 2,500 blood samples per class. Figure 2 shows a sample data instance of each blood sample class.
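As an illustration only, the snippet below shows one way such a four-class image folder could be loaded with torchvision, together with the kind of simple augmentation mentioned in the pipeline description. The directory layout, input size, and augmentation choices are assumptions and are not taken from the paper.

```python
from torchvision import datasets, transforms

# Assumed layout: wbc_dataset/{eosinophil,lymphocyte,monocyte,neutrophil}/*.jpg
train_transform = transforms.Compose([
    transforms.Resize((227, 227)),        # AlexNet-style input size (assumed)
    transforms.RandomHorizontalFlip(),    # example augmentation
    transforms.RandomRotation(10),        # example augmentation
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("wbc_dataset", transform=train_transform)
print(dataset.classes)   # ['eosinophil', 'lymphocyte', 'monocyte', 'neutrophil']
print(len(dataset))      # about 10,000 images, roughly 2,500 per class
```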

4. Image Processing

To overcome the classification accuracy deficiency problem, the proposed framework uses an image processing technique, e.g., histogram equalization, to enhance the quality of the blood samples. With the help of histogram equalization, as shown in Figures 3 and 4, the proposed framework enhances the contrast and intensity distribution of the pixels in the blood samples. Equation (3) represents the cumulative distribution function used for histogram equalization.
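As a hedged illustration of this step, the OpenCV sketch below applies histogram equalization to the luminance channel of a colour blood-smear image. The choice of colour space and the file path are assumptions, since the paper does not state how colour images are handled; per-channel equalization or CLAHE would be equally plausible variants.

```python
import cv2

def equalize(bgr_image):
    """Histogram equalization on the luminance channel of a colour blood image.

    Each grey level r_k is remapped through the cumulative distribution
    s_k = (L - 1) * sum_{j <= k} n_j / N, spreading the intensities over the
    full dynamic range and raising the contrast of the cells.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y = cv2.equalizeHist(y)                              # equalize luminance only
    return cv2.cvtColor(cv2.merge((y, cr, cb)), cv2.COLOR_YCrCb2BGR)

# Example usage (path is illustrative):
# enhanced = equalize(cv2.imread("blood_sample.jpg"))
```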

5. Simulation Results and Discussion

In this article, transfer learning has been used for the prediction of blood cancer from the eosinophils, lymphocytes, monocytes, and neutrophils in the blood cell dataset [45]. For the simulations used to train and test the data samples, the proposed model used a 2017 MacBook Pro with 16 GB of random access memory, a Core i5 processor, and a 512 GB solid-state drive. The proposed framework divided the dataset into 70% of the blood cell images for training and 30% for testing. To remove anomalies in the dataset, different preprocessing techniques were applied. Numerous transfer learning algorithms were used to train the models and test the data samples, and the different training and testing phases are discussed and elaborated in this article. To measure the performance of all trained models and test results, the proposed framework used the statistical performance parameters defined in the methodology section above.
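A minimal sketch of how such a stratified 70/30 split could be produced is given below; the use of scikit-learn, the folder name, and the random seed are assumptions made for illustration rather than details taken from the paper.

```python
from sklearn.model_selection import train_test_split
from torch.utils.data import Subset
from torchvision import datasets

dataset = datasets.ImageFolder("wbc_dataset")           # assumed layout as before
labels = [label for _, label in dataset.samples]        # class index of every image

train_idx, test_idx = train_test_split(
    list(range(len(dataset))),
    test_size=0.30,        # 70% training / 30% testing, as stated in the paper
    stratify=labels,       # keep the four WBC classes balanced in both splits
    random_state=0,        # illustrative seed
)

train_set = Subset(dataset, train_idx)                   # feeds the training loader
test_set = Subset(dataset, test_idx)                     # feeds the testing loader
```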

Figure 5 shows the training progress of SGD with momentum using AlexNet without image processing. To train this model, the proposed model set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. Stochastic gradient descent with momentum therefore achieves 75.78% CA and a 24.22% MCR.

Figure 6 shows the training progress of adaptive moment estimation (ADAM) using AlexNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. ADAM therefore achieves 86.72% classification accuracy and a 13.28% misclassification rate.

Figure 7 shows the training progress of root mean square propagation (RMSPROP) using AlexNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. RMSPROP therefore achieves 75.78% CA and a 24.22% MCR.

Figure 8 shows the training progress of SGD with momentum using MobileNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. Stochastic gradient descent with momentum therefore achieves 75.00% CA and a 25.00% MCR.

Figure 9 shows the training progress of ADAM using MobileNet without image processing. To train this model, the proposed framework set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. ADAM therefore achieves 82.03% classification accuracy and a 17.97% misclassification rate.

Figure 10 shows the training progress of RMSPROP using MobileNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. RMSPROP therefore achieves 75.00% CA and a 25.00% MCR.

Figure 11 shows the training progress of SGD with momentum using ResNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. Stochastic gradient descent with momentum therefore achieves 87.50% classification accuracy and a 12.50% misclassification rate.

Figure 12 shows the training progress of ADAM using ResNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. ADAM therefore achieves 74.22% classification accuracy and a 25.78% misclassification rate.

Figure 13 shows the training progress of RMSPROP using ResNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The training curve shows a great deal of distortion and does not converge even at the last epoch; this distortion is caused by the contrast and pixel imbalance in the data samples. RMSPROP therefore achieves 71.09% classification accuracy and a 28.91% misclassification rate.

Table 3 shows the training results of the AlexNet models after image processing; all models were tuned for 5,800 iterations, a 0.001 learning rate, and 100 epochs. Stochastic gradient descent with momentum outperformed the other training configurations, achieving 99.2% classification accuracy and a 0.8% misclassification rate.

Figure 14 shows the training progress of SGD with momentum using AlexNet after image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. This training model converges before 90 epochs and gives the highest training results.

Table 4 shows the training results of the ResNet models after image processing; all models were tuned for 5,800 iterations, a 0.001 learning rate, and 100 epochs. RMSPROP outperformed the other models, achieving 69.53% classification accuracy and a 30.47% misclassification rate.

Table 5 shows the training results of the MobileNet models after image processing; all models were tuned for 5,800 iterations, a 0.001 learning rate, and 100 epochs. RMSPROP outperformed the other models, achieving 73.44% classification accuracy and a 26.56% misclassification rate.

Table 6 shows the test results of the AlexNet models after image processing. The proposed framework finds that SGDM performs very well compared with the other models. SGDM correctly predicted 719 lymphocyte, 742 monocyte, and 716 neutrophil cancerous cells and achieves 97.3% classification accuracy and a 2.7% misclassification rate. ADAM correctly predicted 674 lymphocyte, 703 monocyte, and 720 neutrophil cancerous cells and achieves 93.7% classification accuracy and a 6.3% misclassification rate. RMSPROP correctly predicted 714 lymphocyte, 680 monocyte, and 692 neutrophil cancerous cells and achieves 93.2% classification accuracy and a 6.8% misclassification rate.

Table 7 shows the test results of the MobileNet models after image processing. The proposed framework finds that RMSPROP performs very well compared with the other models. RMSPROP correctly predicted 729 lymphocyte, 559 monocyte, and 159 neutrophil cancerous cells and achieves 64.9% classification accuracy and a 35.1% misclassification rate. ADAM correctly predicted 676 lymphocyte, 339 monocyte, and 390 neutrophil cancerous cells and achieves 63.0% classification accuracy and a 37.0% misclassification rate. SGDM correctly predicted 678 lymphocyte, 284 monocyte, and 388 neutrophil cancerous cells and achieves 60.5% classification accuracy and a 39.5% misclassification rate.

Table 8 shows the test results of the ResNet models after image processing. The proposed framework finds that ADAM performs very well compared with the other models. ADAM correctly predicted 736 lymphocyte, 319 monocyte, and 428 neutrophil cancerous cells and achieves 66.5% classification accuracy and a 33.5% misclassification rate. RMSPROP correctly predicted 728 lymphocyte, 608 monocyte, and 128 neutrophil cancerous cells and achieves 65.6% classification accuracy and a 34.4% misclassification rate. SGDM correctly predicted 717 lymphocyte, 409 monocyte, and 352 neutrophil cancerous cells and achieves 66.2% classification accuracy and a 33.8% misclassification rate.

Table 9 shows the statistical metric results of blood cancer prediction after image processing. These results show that the SGDM-trained AlexNet outperformed all other models, achieving 97.3% classification accuracy and a 2.7% misclassification rate, whereas the SGDM-trained MobileNet performed below the performance line, achieving 60.5% classification accuracy and a 39.5% misclassification rate.

Table 10 shows the testing results of all models without image processing; the SGDM-trained AlexNet outperforms the other models, achieving 78.6% classification accuracy and a 21.4% misclassification rate. However, when these results are compared with those obtained after image processing, the models trained after image processing perform considerably better. Overall, the proposed framework performed outstandingly compared with previous studies.

Table 11 gives a descriptive comparative analysis of this study against previous work. Pansombut et al. [35] achieved 80% classification accuracy and a 20% misclassification rate with a CNN model on publicly available blood sample images. Madhukar et al. [34] achieved 93.5% classification accuracy and a 6.5% misclassification rate with an SVM classifier on publicly available blood sample images. Supardi et al. [33] achieved 86% classification accuracy and a 14% misclassification rate with a KNN model on publicly available blood sample images. Mishra and Patel [32] achieved 93.57% classification accuracy and a 6.43% misclassification rate with an SVM classifier on publicly available blood sample images. Laosai and Chamnongthai [31] achieved 92% classification accuracy and an 8% misclassification rate with an SVM model on publicly available blood sample images. Faivdullah et al. [30] achieved 79.38% classification accuracy and a 20.62% misclassification rate with an SVM model on publicly available feature-based blood samples. Setiawan et al. [29] achieved 87% classification accuracy and a 13% misclassification rate with SVM and k-means models on publicly available blood sample images. Kumar et al. [28] achieved 92.8% classification accuracy and a 7.2% misclassification rate with CNN, naive Bayes, and KNN models on publicly available blood sample images. Loey et al. [27] achieved 94.3% classification accuracy and a 5.7% misclassification rate with CNN and AlexNet models on publicly available blood sample images. In contrast, the proposed study achieved the highest classification accuracy of 97.3% with a 2.7% misclassification rate, because the proposed framework used image processing techniques empowered with various transfer learning algorithms.

6. Conclusion and Future Work

The early detection of blood cancer using white blood cells can contribute greatly to its cure. The framework proposed in this study consists of three transfer learning models, AlexNet, MobileNet, and ResNet, empowered with SGDM, ADAM, and RMSPROP. The proposed framework applies all the transfer learning models, under varying learning criteria, to cancerous white blood cells for early classification. To enhance the results, the proposed study used image processing techniques incorporated with transfer learning and achieved the highest CA of 97.3% with a 2.7% MCR. All experiments in the proposed study are comprehensively explained with respect to the training and testing phases of every model. This study helps Health 5.0 to predict blood cancer in its early stages for early treatment. In the future, federated machine learning using the federated averaging technique, together with fused machine learning and deep learning techniques, is expected to play a major role in better early prediction of blood cancer, and additional statistical techniques such as ANOVA and chi-square tests will also be applied.

Abbreviations

SVM:Support vector machine
CNN:Convolutional neural network
KNN:K-nearest neighbor
WBC:White blood cells
SGD:Stochastic gradient descent
CA:Classification accuracy
NPV:Negative predicted value
MCR:Misclassification rate
PPV:Positive predicted value
LPR:Likelihood positive ratio
LNR:Likelihood negative ratio
FNR:False negative rate
FPR:False positive rate
FMI:Fowlkes–Mallows index
ADAM:Adaptive moment estimation
RMSPROP:Root mean square propagation
ANOVA:Analysis of variance.

Data Availability

The data used in this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this work.