# Bayesian Decision Model


I come from the generation that treats deep learning as a fancy tool, and when I first started to learn about generative models, I was pretty stumped. Now, equipped with hours and hours of procrastination, discussions with my peers, paper reading, and a bunch of tutorials, I have my fundamentals ready. This blog should give you, the hunter of AWS and Google Cloud credits or just an avid learner, a head start.

It’s recommended that the readers have a basic knowledge of generative models in perception. If not, I would like to point the readers here.

My aim throughout this blog post is to give you, the reader, a sense of the theory of Bayesian decision models as used in behavioral experiments in psychology and neuroscience. Being a firm believer in Feynman's "What you cannot create (he meant code), you do not understand", I also provide a way to implement this model.

Lastly, if you were looking for Bayesian data analysis, this is the wrong blog. But your dear author is working on that for the future, so stay tuned.

Our first two variables of interest are $x$ and $s$, where $x$ is a noisy measurement of $s$, and $s$ constitutes the states of the world.

An incoming arrow $(s \rightarrow x)$ implies that the distribution on $x$ is a conditional one. In this model of the brain, the noisy measurement (observation $x$) is generated from the latent stimulus space in the observer's model (a.k.a. the model in the brain) and is described by $p(x \vert s)$.

Now we add a third variable $C$, which divides the stimulus space $s$ into categories: $(C \rightarrow s \rightarrow x)$. An example of such a division is a stimulus moving left vs. right; then $p(C)$ is the subject's belief that the observed stimulus is left- or right-moving. As above, $s$ is defined by the conditional $p(s \vert C)$. By marginalizing over $s$, we can define

$$p(x \vert C) = \int p(x \vert s)\, p(s \vert C)\, ds.$$
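The generative chain $C \rightarrow s \rightarrow x$ can be sketched with ancestral sampling. Everything concrete here is an assumption for illustration: two categories $C \in \{-1, +1\}$, Gaussian category-conditioned stimulus distributions $p(s \vert C)$, and Gaussian measurement noise $p(x \vert s)$; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the post): C = -1 is "leftward",
# C = +1 is "rightward"; p(s | C) and p(x | s) are both Gaussian.
p_C = {-1: 0.5, +1: 0.5}   # prior over categories, p(C)
mu_s, sigma_s = 2.0, 1.0   # p(s | C) = N(C * mu_s, sigma_s^2)
sigma_x = 1.5              # p(x | s) = N(s, sigma_x^2)

def sample_trial():
    """Ancestral sampling through the chain C -> s -> x."""
    C = rng.choice(list(p_C), p=list(p_C.values()))  # draw a category
    s = rng.normal(C * mu_s, sigma_s)                # draw a stimulus given C
    x = rng.normal(s, sigma_x)                       # draw a noisy measurement given s
    return C, s, x
```

Under these Gaussian assumptions the marginal $p(x \vert C)$ has a closed form, $\mathcal{N}(C \mu_s,\, \sigma_s^2 + \sigma_x^2)$, which the later inference step can exploit.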

In an experiment, the subject infers the cause $C$ of the observed noisy measurement $x$; hence we compute $p(C \vert x)$. By simply applying Bayes' rule, this is

$$p(C \vert x) = \frac{p(x \vert C)\, p(C)}{p(x)}.$$
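Continuing with the same assumed Gaussian setup, the posterior over categories is just likelihood times prior, normalized. The marginal likelihood $p(x \vert C)$ used below is the closed-form Gaussian $\mathcal{N}(C \mu_s,\, \sigma_s^2 + \sigma_x^2)$; all parameter values remain illustrative.

```python
import numpy as np

# Same hypothetical Gaussian setup as before.
p_C = {-1: 0.5, +1: 0.5}
mu_s, sigma_s, sigma_x = 2.0, 1.0, 1.5
var_xC = sigma_s**2 + sigma_x**2  # variance of the marginal p(x | C)

def gauss_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def posterior(x):
    """p(C | x) via Bayes' rule: likelihood * prior, then normalize."""
    unnorm = {C: gauss_pdf(x, C * mu_s, var_xC) * p_C[C] for C in p_C}
    Z = sum(unnorm.values())  # this is p(x), the normalizer
    return {C: v / Z for C, v in unnorm.items()}
```

With a symmetric prior, a measurement at $x = 0$ leaves the posterior at 50/50, and the posterior tilts toward $C = +1$ as $x$ grows, which is a quick sanity check on the code.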

Having beliefs over the different categories given the noisy sensory measurement, i.e. $p(C = C_i \vert x)$, the subject makes a choice $Ch$ $(x \rightarrow Ch)$. This variable is observed and is encircled in the graphic. The choice originates as an action/response from comparing the different $p(C = C_i \vert x)$, and it is the decision-making part of Bayesian decision modeling. Common examples of such decision rules are maximum a posteriori (pick the category with the highest posterior) and probability matching (pick each category in proportion to its posterior).

For our final showdown, the experimenter provides the subject with a stimulus $e$, which induces $x$ in the first place. This is also an observed variable and hence encircled. As experimenters, we are interested in the quantity $p(Ch \vert e)$: the probability that the observer makes a given choice under the experimenter-controlled stimulus $e$. Marginalizing over the noisy measurements,

$$p(Ch \vert e) = \int p(Ch \vert x)\, p(x \vert e)\, dx.$$

To unpack this equation: we compute the response probability $p(Ch = Ch_i \vert e)$ as the frequency of choices across the different sensory measurements $x$, which are in turn generated from the experimenter-provided stimulus $e$.

### Putting theory into code

Coming soon!!
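In the meantime, here is a minimal end-to-end sketch, under the same illustrative Gaussian assumptions used throughout (two categories $C \in \{-1, +1\}$, Gaussian $p(s \vert C)$ and $p(x \vert s)$) and a MAP decision rule. It estimates the experimenter-facing quantity $p(Ch \vert e)$ by Monte Carlo: simulate many noisy measurements $x \sim p(x \vert e)$, run the observer's decision rule on each, and tally the choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Gaussian observer model (same assumed parameters as above).
p_C = {-1: 0.5, +1: 0.5}
mu_s, sigma_s, sigma_x = 2.0, 1.0, 1.5
var_xC = sigma_s**2 + sigma_x**2  # variance of the marginal p(x | C)

def gauss_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def map_choice(x):
    """MAP decision rule: pick the category with the higher posterior p(C | x)."""
    post = {C: gauss_pdf(x, C * mu_s, var_xC) * p_C[C] for C in p_C}
    return max(post, key=post.get)

def p_choice_given_e(e, n_trials=10_000):
    """Estimate p(Ch | e) by marginalizing over noisy measurements x ~ p(x | e)."""
    xs = rng.normal(e, sigma_x, size=n_trials)  # simulate p(x | e) = N(e, sigma_x^2)
    choices = np.array([map_choice(x) for x in xs])
    return {C: np.mean(choices == C) for C in p_C}
```

Sweeping `p_choice_given_e` over a range of stimulus values $e$ traces out the model's psychometric curve, which is exactly what one would fit to a subject's choice frequencies in an experiment.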

Much of this post is shaped by discussions with Sabyasachi Shivkumar and Ralf Haefner, Richard’s blog, and Wei Ji Ma’s Cosyne Tutorial as well as his Primer on Bayesian Decision Models.