Our previous decision tree model for detecting diabetes comprises five risk factors: age, waist/hip ratio (WHR), waist, duration of hypertension, and weight, achieving an AUC of 0.731.
Input: an attribute set dataset D
Output: a decision tree
(a) Tree = {}
(b) if D is "pure" or other end conditions are met, then
(c)     terminate
(d) end if
(e) for each attribute a ∈ D do
(f)     compute the information gain ratio (InGR)
(g) end for
(h) a_best = the attribute with the highest InGR
(i) Tree = create a tree with only the node a_best at the root
(j) D_v = generate subsets from D excluding a_best
(k) for all D_v do
(l)     subtree = C4.5(D_v)
(m)     attach the subtree to the corresponding branch of Tree according to the InGR
(n) end for

The training steps of the LVQ algorithm are as follows.
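The C4.5 selection step above hinges on the information gain ratio. A minimal stdlib-only sketch of that criterion (the function names and toy data are illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr_index):
    """Information gain ratio (the C4.5 InGR) for splitting on one attribute.

    rows: list of attribute tuples; labels: class labels;
    attr_index: which attribute column to split on.
    """
    n = len(labels)
    # Partition the labels by the attribute's value.
    parts = {}
    for row, y in zip(rows, labels):
        parts.setdefault(row[attr_index], []).append(y)
    # Information gain: entropy reduction achieved by the split.
    gain = entropy(labels) - sum(len(p) / n * entropy(p) for p in parts.values())
    # Split information penalizes attributes with many distinct values.
    split_info = -sum((len(p) / n) * math.log2(len(p) / n) for p in parts.values())
    return gain / split_info if split_info > 0 else 0.0

# A perfectly separating binary attribute yields a gain ratio of 1.0.
print(gain_ratio([("a",), ("a",), ("b",), ("b",)], [0, 0, 1, 1], 0))
```

In the pseudocode, step (h) would simply take the attribute maximizing this value over all candidates.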
The core algorithm used in this paper for building the decision tree is based on ID3 and J48, which use entropy and information gain to construct the tree.
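To illustrate how entropy and information gain drive ID3-style attribute selection, here is a short sketch on hypothetical toy data (not the paper's dataset or code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, i):
    """Entropy reduction from splitting on attribute column i (ID3 criterion)."""
    n = len(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[i], []).append(y)
    return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())

# Toy data: attribute 0 separates the classes perfectly, attribute 1 does not,
# so ID3 would place attribute 0 at the root.
rows = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = [0, 0, 1, 1]
best = max(range(2), key=lambda i: information_gain(rows, labels, i))
print(best)  # 0
```

J48 (Weka's C4.5 implementation) refines this criterion with the gain ratio, which divides the gain by the split information to avoid favoring many-valued attributes.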
This section presents the results of the J48 decision tree algorithm for classification of skin disease, as shown in Figure 3, Table 4, and Table 5.
Comparison of artificial neural network and decision tree algorithms used for predicting live weight at the post-weaning period from some biometrical characteristics in Harnai sheep.
The overall prediction accuracy of the decision tree model is (54 + 253)/347 = 88.5 percent, indicating an acceptable prediction model.
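That accuracy figure is the confusion-matrix diagonal over the total case count; a quick check of the arithmetic:

```python
# Accuracy as correctly classified cases over all cases.
# 54 and 253 are the two correct (diagonal) counts quoted above; 347 is the total.
correct = 54 + 253
total = 347
accuracy = correct / total
print(f"{accuracy:.1%}")  # 88.5%
```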
Min, Lee, and Han (2006) mention other pros and cons of decision trees and their applicability.
(1) The establishment of the decision tree model; (2) decision tree classification in ENVI; (3) accuracy evaluation in ENVI.
OECD countries were categorized into six groups according to the decision tree model.