You can pass the data `(x, y, sigmay)` and the parameters
`(m, b)` in "blobs" (Python tuples) by doing something like

```python
# pack up data blob
data = (x, y, sigmay)
# pack up parameter blob
params = (m, b)
# call
newparams = metropolis_hastings_step(params, data)
# unpack new parameter blob
(m, b) = newparams
```

In the main loop, make sure you call the `step` function
with the *output of the previous step*. For example, your code
could look something like

```python
(m, b) = (2.2, 30)
chain = []
for i in range(nsteps):
    # over-write old (m, b) with output of step function
    (m, b) = metropolis_hastings_step((m, b), (x, y, sigmay))
    # save output in chain
    chain.append((m, b))
# turn "chain" into a numpy array (it's currently a list)
chain = array(chain)
# clear the plot window
clf()
# histogram the m values, using 50 bins
hist(chain[:, 0], 50)
savefig('mcmc-mhist.png')
```

If you don't want to fiddle with step sizes, try an *m* step size
of about 0.04 and a *b* step size of about 4.

If you *do*: in general, you want an acceptance
fraction of about 0.5. You can instrument
your `metropolis_hastings_step` with "metadata" like this:

```python
        return (new_params, 1)
    else:
        return (old_params, 0)
```

and then gather the 1 and 0 values to measure the acceptance fraction like this:

```python
(outparams, accept) = metropolis_hastings_step(params, data)
(m, b) = outparams
chain.append(outparams)
n_accept += accept
print("acceptance fraction", n_accept / float(nsteps))
```

If your output looks like it is "remembering" the initial
conditions (starting point), plot a histogram of *only the second
half* of the chain with

```python
clf()
# keep only the second half of the chain (integer division!)
hist(chain[nsteps // 2:, 0], 50)
savefig('mcmc-mhist.png')
```

Your `sample_new_params` function might look like this:

```python
def sample_new_params(old_params):
    # unpack the old_params
    (m, b) = old_params
    # add random "jitters" to m and b here...
    return (m, b)
```
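One way to fill in the "jitter" step is a sketch like this, using Gaussian jitters with the 0.04 and 4 step sizes suggested above (tune them for your own problem):

```python
import numpy.random as random

def sample_new_params(old_params):
    # unpack the old_params
    (m, b) = old_params
    # add independent Gaussian jitters to m and b;
    # the 0.04 and 4 step sizes are just suggested starting values
    m_new = m + 0.04 * random.normal()
    b_new = b + 4.0 * random.normal()
    return (m_new, b_new)
```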

Then your `posterior_probability` function might look like this:

```python
def posterior_probability(params, data):
    # unpack the params
    (m, b) = params
    # unpack the data
    (x, y, sigmay) = data
    # use a flat (improper!) prior for now...
    prior = 1.
    likelihood = straight_line_gaussian_likelihood(x, y, sigmay, m, b)
    return prior * likelihood
```
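If you haven't written `straight_line_gaussian_likelihood` yet, one possible sketch is a product of independent Gaussians, one per data point, each centered on the model prediction `m*x + b`. (In real problems you would usually work with the *log*-likelihood to avoid numerical underflow.)

```python
import numpy as np

def straight_line_gaussian_likelihood(x, y, sigmay, m, b):
    # chi-squared: sum of squared, sigma-scaled residuals
    chi2 = np.sum(((y - (m * x + b)) / sigmay) ** 2)
    # Gaussian normalization factor, one per data point
    norm = np.prod(1. / (np.sqrt(2. * np.pi) * sigmay))
    return norm * np.exp(-0.5 * chi2)
```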

And you'll probably call the `metropolis_hastings_step`
function like this:

```python
params = (m, b)
data = (x, y, sigmay)
new_params = metropolis_hastings_step(params, data)
```

You can get Gaussian random numbers by adding this near the top of your code file,
**under** the other `import` statements:

```python
import numpy.random as random
```

Then when you need a Gaussian-distributed random number (mean 0, variance 1), do:

```python
e = random.normal()
```

For a uniform random number in the range 0 to 1, do:

```python
e = random.uniform()
```
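A quick sanity check of both calls (the `seed` line is optional; it just makes the draws reproducible):

```python
import numpy.random as random

random.seed(42)                 # optional: makes the draws reproducible
e = random.normal()             # one draw from N(0, 1)
u = random.uniform()            # one draw from Uniform[0, 1)
es = random.normal(size=10000)  # you can also draw many at once
print(es.mean(), es.std())      # should be close to 0 and 1
```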