Naive Bayes is a simple and effective machine learning classifier based on Bayes' theorem. It remains accurate even when the number of input features is large. The algorithm assumes that the value of a particular feature is independent of the value of every other feature, given the class label.
Let’s consider a hypothesis H and let E be the evidence. Bayes’ theorem relates the probability of H before observing E (the prior) to the probability of H after observing E (the posterior):
P(H | E) = P(E | H) · P(H) / P(E)
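For a feature vector E = (E_1, ..., E_n), the naive independence assumption lets the likelihood factor as P(E | H) = P(E_1 | H) · ... · P(E_n | H), and P(E) can be dropped when comparing classes because it is the same for every hypothesis. The Python sketch below illustrates this; the toy weather data, the posterior_scores function, and the add-one smoothing are illustrative assumptions rather than part of the original text.

```python
from collections import Counter, defaultdict

# Hypothetical toy data set: each row is (features, class label).
# Features are (weather, temperature); the label is whether to play outside.
data = [
    (("sunny", "hot"),  "no"),
    (("sunny", "mild"), "yes"),
    (("rainy", "mild"), "yes"),
    (("rainy", "cool"), "no"),
    (("sunny", "cool"), "yes"),
]

# Prior P(H): relative frequency of each class in the training data.
class_counts = Counter(label for _, label in data)
total = sum(class_counts.values())

# Per-feature vocabularies, used for Laplace (add-one) smoothing.
vocab = defaultdict(set)
# Likelihood counts for P(E_i | H), kept separately per feature position;
# estimating each feature on its own is exactly the naive independence assumption.
feature_counts = defaultdict(Counter)   # (label, i) -> Counter over feature values
for features, label in data:
    for i, value in enumerate(features):
        vocab[i].add(value)
        feature_counts[(label, i)][value] += 1

def posterior_scores(features):
    """Return scores proportional to P(H | E) = P(H) * prod_i P(E_i | H)."""
    scores = {}
    for label, count in class_counts.items():
        score = count / total                      # prior P(H)
        for i, value in enumerate(features):
            counts = feature_counts[(label, i)]
            # Add-one smoothing so an unseen value does not zero out the product.
            score *= (counts[value] + 1) / (count + len(vocab[i]))
        scores[label] = score
    return scores

# P(E) is identical for every class, so the predicted class is simply
# the one with the highest unnormalised score.
scores = posterior_scores(("sunny", "mild"))
print(max(scores, key=scores.get), scores)
```

Normalising the scores by their sum would recover the actual posterior probabilities P(H | E), but for classification only the ranking of the classes matters.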