Page 92 - Demo
Expected information (entropy) needed to classify a tuple in D:

    Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)

Information needed (after using A to split D into v partitions) to classify D:

    Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)

Information gained by branching on attribute A:

    Gain(A) = Info(D) - Info_A(D)

How do we construct the decision tree?

Class P: buys_computer = "yes"
Class N: buys_computer = "no"

age     income   student   credit_rating   buys_computer
<=30    high     no        fair            no
<=30    high     no        excellent       no
31..40  high     no        fair            yes
>40     medium   no        fair            yes
>40     low      yes       fair            yes
>40     low      yes       excellent       no
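The three formulas above can be checked numerically. Below is a minimal Python sketch that computes Info(D), Info_A(D), and Gain(A) for the six training tuples shown in the table; the function names, dictionary keys, and the `gain` helper are illustrative choices, not part of any standard library.

```python
from collections import Counter
from math import log2

def info(labels):
    """Info(D): entropy of a list of class labels, -sum(p_i * log2(p_i))."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(rows, attr, target):
    """Gain(A) = Info(D) - Info_A(D) for splitting `rows` on attribute `attr`."""
    # Partition D into the v subsets D_j induced by the values of attr.
    parts = {}
    for r in rows:
        parts.setdefault(r[attr], []).append(r[target])
    # Info_A(D): entropy of each partition weighted by |D_j| / |D|.
    info_a = sum(len(p) / len(rows) * info(p) for p in parts.values())
    return info([r[target] for r in rows]) - info_a

# The six training tuples from the slide's table.
data = [
    {"age": "<=30",   "income": "high",   "student": "no",  "credit": "fair",      "buys": "no"},
    {"age": "<=30",   "income": "high",   "student": "no",  "credit": "excellent", "buys": "no"},
    {"age": "31..40", "income": "high",   "student": "no",  "credit": "fair",      "buys": "yes"},
    {"age": ">40",    "income": "medium", "student": "no",  "credit": "fair",      "buys": "yes"},
    {"age": ">40",    "income": "low",    "student": "yes", "credit": "fair",      "buys": "yes"},
    {"age": ">40",    "income": "low",    "student": "yes", "credit": "excellent", "buys": "no"},
]

print(round(info([r["buys"] for r in data]), 4))  # Info(D) = 1.0 (3 yes, 3 no)
print(round(gain(data, "age", "buys"), 4))        # Gain(age) = 0.5409
```

With 3 "yes" and 3 "no" tuples, Info(D) = 1 bit; splitting on age leaves only the three ">40" tuples impure, so Gain(age) = 1 - (3/6)(0.9183) = 0.5409. The attribute with the highest gain is chosen as the split node at each step of tree construction.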