mirror of
https://github.com/asimonson1125/Implementations-of-Probability-Theory.git
synced 2026-02-25 06:09:50 -06:00
Complete Bayes unit report pre-advising meeting
This commit is contained in:
@@ -267,7 +267,7 @@ number of documented occurrences is frequently used in philosophical discussion
are valid classifications even when the subset size results in overall fulfilled terms being infrequently categorized as the proposed subset. Most people understand
these expressions, but when shown a table and how to calculate those ratios, the content enters the realm of collegiate instruction.

\subsubsection{Bayes Theorem}\label{Bayes Theorem}

The equation for Bayes Theorem is as follows:
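The report's own rendering of the equation lies outside this hunk; for reference, the standard form is:

\[
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
\]

where $P(A \mid B)$ is the posterior belief in $A$ given evidence $B$, $P(A)$ is the prior, $P(B \mid A)$ is the likelihood, and $P(B)$ is the total probability of the evidence.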
@@ -334,6 +334,68 @@ applications of Bayes Theory calculate posterior probabilities continuously as n
Bayesian Belief Networks are probabilistic graphical models that preserve conditional dependence between random variables. In spite of the name,
Bayesian Belief Networks do not necessarily apply Bayesian models, though they are a way to utilize Bayes Theorem for domains with complexity beyond a
single posterior probability. In this type of network, edges are directed and the structure is utilized in a single direction. This is in contrast to undirected
Hidden Markov Models (to be covered in the next unit) that do not assume the order of acquisition of random variables. While it may not be practical to calculate
the full conditional probability of a variable, Bayesian Belief Networks allow us to identify conditionally dependent variables that are weighted on the basis of
an earlier random variable.
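As a minimal sketch (hypothetical code, not from the repository), the directed structure described here can be encoded as a parent map; the one-way use of edges then amounts to walking only upstream from a node:

```python
# Hypothetical encoding of the two-test network used in the example below:
# each node maps to its parents, so every edge is directed and the
# structure is traversed in a single direction (child -> ancestors).
network = {
    "Test 1": ["Biological Markers"],
    "Test 2": ["Biological Markers"],
    "Dependent Results": ["Test 1", "Test 2"],
}

def ancestors(node, parents=network):
    """Collect every upstream variable a node is conditioned on."""
    seen = set()
    stack = list(parents.get(node, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, []))
    return seen
```

A root such as "Biological Markers" has no parents, so its ancestor set is empty, while "Dependent Results" is conditioned on both tests and, through them, the shared markers.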

Following the example in the Bayes Theorem section of this report (\ref{Bayes Theorem}), let's suppose that a patient with a positive test takes a hypothetical
second test whose results are partially dependent on the first, as they measure overlapping biological markers. In this case, the results of the first test
are relevant to the second test:

\vskip 5pt
\begin{center}
\begin{tikzpicture}
\draw[black, thick] (-2, 4.5) rectangle (2, 5.5);
\node at (0, 5) (bio) {Biological Markers};

\draw[black, thick] (-1.5, 3) circle (0.75);
\node at (-1.5, 3) (T1) {Test 1};

\draw[black, thick] (1.5, 3) circle (0.75);
\node at (1.5, 3) (T2) {Test 2};

\draw[black, thick] (-2, 0) rectangle (2, 1);
\node at (0, 0.5) (DepRes) {Dependent Results};

% Draw arrows from the bottom of the circles to the top of the rectangle
\draw[->] (T1.south) -- (DepRes.north);
\draw[->] (T2.south) -- (DepRes.north);
\draw[->] (bio.south) -- (T1.north);
\draw[->] (bio.south) -- (T2.north);
\end{tikzpicture}
\end{center}

\vskip 5pt
\begin{center}
\begin{tabular}{| c | c | c |}
\hline
Test 1 Result & Test 2 Result & P(A) \\
\hline\hline
\multicolumn{3}{| c |}{Prior beliefs of test 1} \\
\hline
Unknown & Unknown & 10\% \\
Positive & Unknown & 67.857\% \\
Negative & Unknown & 0.581\% \\
\hline
\multicolumn{3}{| c |}{Prior beliefs of test 2} \\
\hline
Unknown & Positive & 55\% \\
Unknown & Negative & 1\% \\
\hline
\multicolumn{3}{| c |}{Dependent results from both tests} \\
\hline
Positive & Positive & 75\% \\
Positive & Negative & 1.5\% \\
Negative & Positive & 0.6\% \\
Negative & Negative & 0.087\% \\
\hline
\end{tabular}
\end{center}

Note that the probability given positive results on both tests (each of which has greater than 50\% of positives being true positives) is as certain as two positives
from two completely independent tests with 50\% of positives being true. If the partial dependence were not included in the calculation, as would have occurred in a
Naive Bayes model, the model's listed accuracy would be inflated.
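The first-test posteriors in the table can be reproduced with Bayes Theorem if one assumes a 95% sensitivity and 95% specificity (hypothetical parameters; the report lists only the resulting probabilities). Chaining the second test as if it were fully independent then shows the inflation a Naive Bayes treatment produces:

```python
# Sketch reproducing the table's first-test posteriors from the 10% prior,
# assuming 95% sensitivity and 95% specificity (assumed, not stated).

def bayes_posterior(prior, p_pos_given_sick, p_pos_given_healthy):
    """P(sick | evidence) = P(evidence | sick) P(sick) / P(evidence)."""
    num = p_pos_given_sick * prior
    return num / (num + p_pos_given_healthy * (1 - prior))

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

prior = 0.10
after_pos = bayes_posterior(prior, 0.95, 0.05)  # ~0.67857: "Positive & Unknown"
after_neg = bayes_posterior(prior, 0.05, 0.95)  # ~0.00581: "Negative & Unknown"

# Naive Bayes error: treat the tests as independent and multiply the
# prior odds by each test's standalone likelihood ratio.
lr1 = odds(after_pos) / odds(prior)         # = 19, from the 67.857% row
lr2 = odds(0.55) / odds(prior)              # = 11, from the 55% row
naive_both = prob(odds(prior) * lr1 * lr2)  # ~0.959
```

Under these assumptions, the naive combination lands near 96%, well above the 75% the dependence-aware table assigns to two positives; that gap is the inflated certainty described above.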

\newpage
\section{Unit 4: Markov Chains}
\end{document}