\documentclass[12pt]{article}
\usepackage{blindtext}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[a4paper, total={6in, 10in}]{geometry}
\usepackage{setspace}
\setstretch{1.25}
\hyphenpenalty 1000

\begin{document}

\begin{titlepage}
\begin{center}
\vspace*{5cm}
\Large{\textbf{Implementations of Probability Theory}}\\
\rule{14cm}{0.05cm}\\
\vspace{.25cm}
\Large{Independent Study Report}\\
\large{Andrew Simonson}
\vspace*{\fill}
\large{Compiled on: \today}\\
\end{center}
\end{titlepage}
\newpage

% Table of Contents
% \large{Table of Contents}
\tableofcontents
\addtocontents{toc}{~\hfill\textbf{Page}\par}
\newpage

% Begin report
\section{Objective}
The educational focus of Implementations of Probability Theory centers on the application of data models that produce non-deterministic insights through probabilistic methods. Through this study I hope to gain a deeper understanding of how to apply data to risk calculation for mitigation scenarios as they appear in real life, rather than under the experimental lab conditions that enable algorithmic certainty. In contrast to the black-box artificial intelligence and algorithms taught in \textbf{CSCI 335: Machine Learning}, this study is tailored to methods designed to produce confidence levels for uncertain events in certain terms, leveraging logical, traceable, and definite calculations. Current course offerings in data science focus largely on the storage and management of data; indeed, the data science cluster was until very recently branded as data management. Implementations of Probability Theory is intended to extend previous coursework, notably \textbf{CSCI 420: Principles of Data Mining}, with more advanced algorithms used at the intersection of data and computing after the preprocessing stage.
After beginning this study, the intended deliverable outline was determined to be technically infeasible and has been replaced with demonstrations of applied algorithms. Taking inspiration from the retinal mosaic as displayed in \textbf{CSCI 431: Intro to Computer Vision} and discussion in \textbf{IGME 589: Computational Creativity and Algorithmic Art} on the appearance and nature of randomness in graphics, I hope to create a program that can determine the likelihood that randomly distributed colors on a hexagonal grid appear as they do in an image.
\newpage

\section{Units}
\rule{14cm}{0.05cm}

\subsection{Unit 1: Statistics Review}
To ensure a strong statistical foundation for later work in probabilistic models, the first objective was to create a document outlining and defining key topics that are prerequisites for probability in statistics or for understanding generic analytical models.

\subsubsection{Random Variables}
\begin{enumerate}
\item \textbf{Discrete Random Variables - }values are selected by chance from a countable (including countably infinite) list of distinct values
\item \textbf{Continuous Random Variables - }values are selected by chance from an uncountable number of possible values within the variable's range
\end{enumerate}

\subsubsection{Sample Space}
A sample space is the set of all possible outcomes of an instance. For a roll of a six-sided die, the die may land with 1 through 6 dots facing upwards, hence:
\[S = \{1, 2, 3, 4, 5, 6\} \quad\text{where }S\text{ is the sample space}\]

\subsubsection{Probability Axioms}
There are three probability axioms:
\begin{enumerate}
\item \textbf{Non-negativity}:
\[ P(A) \geq 0 \quad \text{for any event }A, \ P(A) \in \mathbb{R} \]
No event can be less likely to occur than an impossible event (\(P(A) = 0\)), and \(P(A)\) is a real number. Paired with axioms 2 and 3 (applied to the complement of \(A\)), we can also conclude that \(P(A) \leq 1\).
\item \textbf{Normalization}:
\[ P(S) = 1\quad\text{where }S\text{ is the sample space} \]
\textbf{Unit Measure - } All event probabilities in a sample space add up to 1. In essence, there is a 100\% chance that one of the outcomes in the sample space will occur.
\item \textbf{Additivity}:
\[ P(A \cup B) = P(A) + P(B) \quad \text{if } A \cap B = \emptyset \]
A union of events that are mutually exclusive (events that cannot both happen in one instance) has a probability equal to the sum of the individual event probabilities.
\end{enumerate}

\subsubsection{Expectations and Deviation}
\begin{enumerate}
\item \textbf{Expectation - }The probability-weighted average of the values in the sample space:
\[E = \sum_{x \in S} x \, P(X = x) \quad\text{where }E\text{ is the expected value}\]
\item \textbf{Variance - }The spread of possible values for a random variable, calculated as:
\[\sigma^{2}=\frac{\sum(X - \mu)^{2}}{N}\]
Where \(N\) is the population size, \(\mu\) is the population average, and \(X\) is each value in the population.\\
For samples, variance is calculated with \textbf{Bessel's Correction}, which divides by \(n - 1\) instead of \(n\) to correct the bias introduced by estimating the mean from the same sample:
\[s^{2}=\frac{\sum(X - \bar{x})^{2}}{n - 1}\]
\item \textbf{Standard Deviation - }The square root of the variance, giving a measure of the average distance of each data point from the mean in the same units as the data.
\[\sigma = \sqrt{\sigma^{2}}\]
\end{enumerate}

\subsubsection{Probability Functions}
Probability functions map each possible value of a random variable to its likelihood.

\subsubsection*{Probability Mass Functions}
Probability Mass Functions (PMFs) map discrete random variables.
For example, a fair six-sided die roll creates a uniform PMF:
\begin{equation*}
P(X = x) =
\begin{cases}
1/6\qquad\text{if }&x=1\\
1/6&x=2\\
1/6&x=3\\
1/6&x=4\\
1/6&x=5\\
1/6&x=6\\
\end{cases}
\end{equation*}

\subsubsection*{Probability Density Functions}
Probability Density Functions (PDFs) map continuous random variables. Because the variable is continuous, individual values have zero probability; instead the density must integrate to 1 over its range to satisfy unit measure. For example, this triangular PDF rises linearly to a peak at \(X = 0.5\), then falls back to zero at \(X = 1\):
\begin{equation*}
f(X) =
\begin{cases}
4X\qquad\qquad\text{if }&0\leq X\leq .5\\
-4X+4&.5 < X\leq 1\\
0&\text{otherwise}
\end{cases}
\end{equation*}
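The definitions in this unit can be checked numerically. The sketch below is a minimal Python example using only the standard library (all names are my own, illustrative choices, not part of any course material): it verifies the non-negativity and unit-measure axioms for a fair-die PMF, computes the expectation, variance, and standard deviation from their definitions, applies Bessel's correction to a sample, and integrates a triangular density on \([0, 1]\) to confirm that its area is 1.

```python
import math

# PMF for a fair six-sided die: each outcome in the sample space is equally likely.
pmf = {x: 1 / 6 for x in range(1, 7)}

# Axiom checks: non-negativity and unit measure (probabilities sum to 1).
assert all(p >= 0 for p in pmf.values())
assert math.isclose(sum(pmf.values()), 1.0)

# Expectation: the probability-weighted average of the outcomes, E = sum of x * P(X = x).
expectation = sum(x * p for x, p in pmf.items())  # 3.5 for a fair die

# Population variance and standard deviation, per the formulas above.
variance = sum(p * (x - expectation) ** 2 for x, p in pmf.items())
std_dev = math.sqrt(variance)

def sample_variance(sample):
    """Sample variance with Bessel's correction: divide by n - 1, not n."""
    n = len(sample)
    mean = sum(sample) / n
    return sum((x - mean) ** 2 for x in sample) / (n - 1)

# Unit measure for a continuous PDF: a triangular density on [0, 1]
# (f(x) = 4x up to 0.5, then -4x + 4) should integrate to 1.
def triangular_pdf(x):
    return 4 * x if x <= 0.5 else -4 * x + 4

n = 10_000
dx = 1 / n
area = sum(triangular_pdf(i / n) * dx for i in range(n))  # left Riemann sum

print(expectation, variance, std_dev, round(area, 4))
```

For the fair die this yields an expectation of 3.5 and a variance of \(35/12 \approx 2.92\); the Riemann sum confirms the triangular density's area is 1 to within the step size.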