Thursday, December 16, 2010

pyBLoCXS is now on GitHub

The pyBLoCXS code base now has a proper home on GitHub!

Tuesday, December 14, 2010

External Python Packages in Sherpa

Tom just posted a nice tutorial on using Python packages in Sherpa on Astropython.

Monday, November 8, 2010

pyblocxs in Sherpa

pyblocxs is a Python Sherpa extension for Bayesian Low Counts X-ray Spectral analysis. It provides MCMC methods to explore a model parameter space and summarize the full posterior or profile posterior distributions. Computed parameter uncertainties can include calibration errors. It simulates replicate data from the posterior predictive distributions, and can test for added spectral components by computing the Likelihood Ratio statistic on replicate data and the ppp-value.
It has two samplers:
(1) Metropolis-Hastings: centered on the best-fit values.
(2) Metropolis-Hastings mixed with a Metropolis jumping rule: centered on the current draw of parameters, with the prior scale specified as a scalar multiple of the Sherpa covar() output.

http://hea-www.harvard.edu/AstroStat/pyBLoCXS/index.html
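
For a feel of the workflow, here is a minimal sketch of driving the samplers from Python. The call names (set_sampler(), get_draws()) follow the pyBLoCXS interface, but treat the exact signatures as assumptions and check the pyBLoCXS documentation:

import sherpa.astro.ui as sherpa
import pyblocxs

# fit first: both samplers are seeded from the best-fit values
sherpa.load_pha("3c273.pi")
sherpa.set_source(sherpa.xsphabs.abs1 * sherpa.powlaw1d.p1)
sherpa.set_stat("cash")
sherpa.fit()
sherpa.covar()   # covar() output sets the scale of the proposal

# pick sampler (2): Metropolis-Hastings mixed with a Metropolis
# jumping rule
pyblocxs.set_sampler("MetropolisMH")

# draw from the posterior: statistic values, acceptance flags,
# and the parameter draws themselves
stats, accept, params = pyblocxs.get_draws(niter=10000)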

Saturday, November 6, 2010

User-defined Model and Fit Statistic

An advanced demo for ADASS 2010 illustrating a script that incorporates a user-defined model and a user-defined fit statistic with a prior into an X-ray spectral fit.

#!/usr/bin/env python
import sherpa.astro.ui as sherpa
from sherpa.models import Parameter, ArithmeticModel
from sherpa.stats import truncation_value
import numpy

class MyPowerLaw(ArithmeticModel):
    """
    Class to characterize a power-law function

    f(x) = ampl (x/ref) ^ (-gamma)

    """
    def __init__(self, name='mypowerlaw'):

        self.gamma = Parameter(name, "gamma", 1.0, min=-10, max=10)
        self.ref = Parameter(name, "ref", 1.0, alwaysfrozen=True)
        self.ampl = Parameter(name, "ampl", 1.0, min=0)

        ArithmeticModel.__init__(self, name,
                                 (self.gamma, self.ref, self.ampl))

    def calc(self, p, x, xhi=None, *args, **kwargs):
        """
        Params
        `p`   list of ordered parameter values.
        `x`   ndarray of domain values, bin midpoints or lower
              bin edges.
        `xhi` ndarray of upper bin edges.

        returns ndarray of calculated function values.
        """
        if xhi is None:
            return self.point(p, x)
        return self.integrated(p, x, xhi)

    @staticmethod
    def point(p, x):
        """
                                       / x_i \ (-p[0])
        point version, f_i(x_i) = p[2] |------|
                                       \ p[1] /

        Params
        `p` list of ordered parameter values.
        `x` ndarray of bin midpoints.

        returns ndarray of calculated function values at
        bin midpoints
        """
        return p[2]*numpy.power(x/p[1], -p[0])

    @staticmethod
    def integrated(p, xlo, xhi):
        """
        integrated form from lower bin edge to upper edge

        ⌠ xhi_i             p[2]  /     /xhi \ (-p[0])       /xlo \ (-p[0]) \
        |  f_i(x_i) dx  =  ------ * | xhi |----|        - xlo |----|          |
        ⌡ xlo_i            1-p[0] \     \p[1]/               \p[1]/          /

        Params
        `p`   list of ordered parameter values.
        `xlo` ndarray of lower bin edges.
        `xhi` ndarray of upper bin edges.

        returns ndarray of integrated function values over
        lower and upper bin edges.
        """
        # special case: exponent of exactly one has a log antiderivative
        if p[0] == 1.0:
            return p[2] * p[1] * (numpy.log(xhi) - numpy.log(xlo))

        p1 = numpy.power(xlo, 1.0-p[0])/(1.0-p[0])
        p2 = numpy.power(xhi, 1.0-p[0])/(1.0-p[0])
        return p[2] / numpy.power(p[1], -p[0]) * (p2 - p1)

# <demo> stop

def logNormal(x, mu, sigma):
    """
    Compute natural log of Normal distribution PDF analytically

    Params
    `x`     x
    `mu`    mean
    `sigma` standard deviation

    returns ln(f_X(x; mu, sigma))
    """
    sigma_sqr = sigma*sigma
    return (-numpy.log(numpy.sqrt(2*numpy.pi*sigma_sqr)) -
            numpy.square(x-mu)/2/sigma_sqr)

# <demo> stop

# stat interface instancemethod
def calc_stat(self, data, model, staterror=None,
              syserror=None, weight=None):
    """
    Calculate the fit statistic value and an array of statistic
    value contributions per bin.

    Params
    `data`      ndarray of observed data points
    `model`     ndarray of predicted data points
    `staterror` ndarray of statistical error on observed data points
    `syserror`  ndarray of systematic error on observed data points
    `weight`    ndarray of statistic weights

    returns a tuple of the fit statistic value and array of
    stat contributions per bin.
    """
    gamma = self.gamma.val           # nH (abs1.nh)
    beta = self.beta.val             # powerlaw index (p1.gamma)
    xi = numpy.log(self.alpha.val)   # log powerlaw norm (p1.ampl)

    phigamma = logNormal(gamma, self.mugamma.val, self.siggamma.val)
    phibeta = logNormal(beta, self.mubeta.val, self.sigbeta.val)
    phixi = logNormal(xi, self.muxi.val, self.sigxi.val)

    # truncation_value used as an approximation for
    # non-positive values in model
    if truncation_value > 0:
        model[model <= 0.0] = truncation_value

    # Poisson log-likelihood summation
    likelihood = numpy.sum(-model + data*numpy.log(model))

    # Add the prior to log-likelihood
    prior = phigamma + phibeta + phixi
    stat = likelihood + prior

    # reverse sign to minimize log-likelihood!
    return (-stat, numpy.ones_like(data))


# <demo> stop

sherpa.load_pha("3c273.pi")
sherpa.notice(0, 6.0)

sherpa.add_model(MyPowerLaw)
sherpa.set_source(sherpa.xsphabs.abs1 * mypowerlaw.p1)

sherpa.load_user_stat("stat", calc_stat,
                      priors=dict(mugamma=0.017,   # hyperparameters
                                  mubeta=1.26,
                                  muxi=-8.5,
                                  siggamma=1,
                                  sigbeta=1,
                                  sigxi=1,
                                  gamma=abs1.nh,   # fit parameters
                                  beta=p1.gamma,
                                  alpha=p1.ampl))

print stat

sherpa.set_stat(stat)
sherpa.set_method("neldermead")

sherpa.fit()

sherpa.plot_fit_delchi()

# <demo> stop

Estimate Flux Errors

A demo for ADASS 2010 to estimate the integrated flux from an X-ray spectral fit using simulations.

#!/usr/bin/env python
import sherpa.astro.ui as sherpa
import numpy

# load the data and fit with an absorbed power-law
sherpa.load_pha("3c273.pi")
sherpa.notice(0.5,7.0)
sherpa.subtract()

sherpa.set_stat("chi2datavar")
sherpa.set_model(sherpa.xsphabs.abs1*sherpa.xspowerlaw.p1)
abs1.nh=0.07
sherpa.freeze(abs1.nh)
sherpa.fit()

# <demo> stop

# simulate fluxes by sampling model parameters for 100 iterations.

flux100=sherpa.sample_energy_flux(0.5,7.,num=100)

# slice the flux column from the simulation table.

fluxes = flux100[:,0]

print "Mean:", numpy.mean(fluxes)

print "Median:", numpy.median(fluxes)

print "Std Dev:", numpy.std(fluxes)

# determine 95% and 50% quantiles using numpy array sorting

sf=numpy.sort(fluxes)

print "95% quantile:", sf[0.95*(len(fluxes)-1)]
print "50% quantile:", sf[0.5*(len(fluxes)-1)]

# <demo> stop

# run 7.5e3 flux simulations and bin a histogram using the result

hist=sherpa.get_energy_flux_hist(0.5,7.,num=7500)
hist.histo_prefs['linecolor']=None
sherpa.plot_energy_flux(recalc=False)

# <demo> stop

# fit a Gaussian to the histogram
sherpa.load_arrays(2,hist.xlo,hist.xhi,hist.y,sherpa.Data1DInt)

sherpa.set_model(2,sherpa.gauss1d.g0)
g0.integrate=False
sherpa.guess(2,g0)
g0.ampl.min=0
g0.ampl.max=100
g0.ampl=1

sherpa.set_stat("leastsq")
sherpa.fit(2)

sherpa.plot_model(2, overplot=True)

# <demo> stop

# calculate sigma instead of FWHM
# FWHM = sigma * sqrt(8 ln 2)

fac0=numpy.sqrt(8*numpy.log(2))
sig=g0.fwhm.val/fac0

print "Sigma: ",sig,"Position: ", g0.pos.val

# write binned histogram arrays to file

sherpa.save_arrays("hist.dat", [hist.xlo, hist.xhi, hist.y],
                   fields=["xlo", "xhi", "y"], clobber=True)

# <demo> stop

Friday, November 5, 2010

Simple Simulation

A simple demo for ADASS 2010 to fake data with Poisson noise and fit.

#!/usr/bin/env python

import sherpa.astro.ui as sherpa

# create simple grid [0.1, 10] with a bin size of 0.01
sherpa.dataspace1d(0.1,10,0.01, dstype=sherpa.Data1D)

# <demo> stop

# shorthand for normgauss1d model
gauss = sherpa.normgauss1d

# define reference model as sum of two normalized gaussians
sherpa.set_model(gauss.ref1+gauss.ref2)

# initialize parameter values
ref1.pos = 2
ref1.fwhm = 2.5

# define area under curve ref1
ref1.ampl = 75

ref2.pos = 8

ref2.fwhm = 5

# define area under curve ref2
ref2.ampl = 125

sherpa.plot_model()

# <demo> stop

# evaluate the model, add poisson noise, and populate the dataspace
sherpa.fake()

# plot faked data as 1D line plot with errorbars
sherpa.plot_data()

# <demo> stop

# fit the faked data with new instances of normgauss1d

sherpa.set_model(gauss.g1+gauss.g2)

# narrow the parameter space boundaries according to data
sherpa.guess(g1)

g1.pos = 2

sherpa.freeze(g1)

sherpa.guess(g2)
g2.pos = 8

sherpa.fit()

sherpa.plot_fit()

# <demo> stop

sherpa.thaw(g1)

sherpa.fit()

sherpa.plot_fit()

# <demo> stop

# Run confidence to estimate limits of best-fit parameter values
sherpa.conf()

# Overplot reference model to show goodness-of-fit
# The reference model is shown in aquamarine, compared to the fitted model in red.
sherpa.set_model(ref1+ref2)

sherpa.get_model_plot_prefs()['linecolor']='aquamarine'
sherpa.plot_model(overplot=True)

# <demo> stop

Wednesday, September 15, 2010

Sherpa Mac Disk Image

I'm happy to announce a new deployment option for Sherpa. The latest version of Sherpa, 4.2.2, has been built as a Mac disk image for OS X 10.5 Leopard using native (meaning non-CIAO) python.org 2.6 and NumPy installations. This experimental build is an effort to minimize the difficulties in building Sherpa's dependencies. The build includes XSPEC 12.6.0 models (which is why it is so large), CIAO region file support (including FITS), CIAO dynamic grouping, and WCS support using WCSSUBS. The user is still responsible for optional packages such as DS9, XPA, matplotlib, and pyFITS. The goal of this project is to use the simple Mac GUI installer so that Sherpa installs as easily as the SciPy and NumPy disk images.

We are currently working on a Snow Leopard/universal version with upgrades to Python 2.7 and NumPy 1.5.0.

http://cxc.harvard.edu/contrib/sherpa/downloads/sherpa-4.2.3-py2.6-python.org-macosx10.5-i386.dmg

Monday, July 19, 2010

New Sherpa Released!

A new version of Sherpa was released today!

The standard CIAO version is available on the Sherpa CIAO web page. The standalone version for Python users is available on the Sherpa v2 page.
The many enhancements include a user interface to models with support for combining convolved and non-convolved model expressions, advanced user models, and improved handling of PSFs and table models.

Thursday, June 10, 2010

Vinay's Cheat Sheet

Vinay has been writing a Sherpa Cheat Sheet on AstroStat Slog. This is a good list of basic "how to" tutorials for Sherpa.

User-defined Sherpa model types using Python Classes

Here is a quick example of a native Sherpa model (logparlaw1d) defined as a Python class. This class can then be registered as a Sherpa model type using the add_model() function (found in Sherpa 4.2.2). The class attribute name can then be used as a model type within a standard Sherpa model expression.


To Python users, instantiation of this class is obscured within the Sherpa model expression syntax. The add_model() function simply adds this class to Sherpa's function factory for creating model instances. The variable mylogparlaw in the local namespace is actually the Sherpa function factory; its __getattr__() method has been overloaded to create instances of MyLogParLaw using the name of the requested attribute. Think of the following declarations as comparable.


lp1 = mylogparlaw.lp1


versus


lp1 = MyLogParLaw('lp1')



See the script below for more details.


#!/usr/bin/env python

from sherpa.astro.ui import *
from sherpa.models import ArithmeticModel, Parameter
import numpy as np


class MyLogParLaw(ArithmeticModel):
    """
                / x  \ -p[1] - p[2] * log10(x/p[0])
    f(x) = p[3] |----|
                \p[0]/

    """
    def __init__(self, name='mylogparlaw'):
        # p[0]
        self.ref = Parameter(name, 'ref', 1, alwaysfrozen=True)

        # p[1]
        self.c1 = Parameter(name, 'c1', 1)

        # p[2]
        self.c2 = Parameter(name, 'c2', 1)

        # p[3]
        self.ampl = Parameter(name, 'ampl', 1, 0)

        ArithmeticModel.__init__(self, name, (self.ref, self.c1,
                                              self.c2, self.ampl))

    def calc(self, p, xlo, xhi=None, *args, **kwargs):
        frac = (xlo/p[0])
        exponent = -p[1] - p[2] * np.log10(frac)
        return p[3] * np.power(frac, exponent)

if __name__ == '__main__':

    # register the class MyLogParLaw as a Sherpa model type
    add_model(MyLogParLaw)

    # Set the Sherpa model expression as the sum of MyLogParLaw
    # and Const1D
    set_model(mylogparlaw.lp1+const1d.c1)

    # Initialize the parameter values
    c1.c0 = 10
    lp1.ampl = 75
    lp1.c1 = 0.5
    lp1.c2 = 1.75
    lp1.ref = 2.1
    print lp1

    # Define a data space grid to be from [0.1, 10] in intervals
    # of 0.01
    dataspace1d(0.1,10,0.01, dstype=Data1D)

    # Evaluate the model over the set of initialized parameters, add
    # poisson noise, and set as data space values.
    fake()

    # Use Chi2 with data variance as fit statistic
    set_stat('chi2datavar')

    # Set the statistical error of data space as 10% of faked
    # data values
    set_staterror(0.1, fractional=True)

    # Plot faked data space values with statistical errorbars
    plot_data()

The plot of the faked data should look something like the image above.

Friday, May 21, 2010

Using a private DS9 Session in Sherpa

Currently, Sherpa uses the default DS9 session 'ds9', which can conflict with other open DS9 windows that are unrelated to Sherpa analysis. To give Sherpa a private DS9 session, use the following work-around.

import sherpa.image
sherpa.image.backend.DS9._DefTemplate="sherpa"

A very quick note on 2D user models

Our documentation currently doesn't help people out who want to create a 2D user model in Sherpa. Here's a quick example of using load_user_model() to create a flat model in Python (a replacement for Sherpa's const2d model).

% sherpa
-----------------------------------------------------
Welcome to Sherpa: CXC's Modeling and Fitting Package
-----------------------------------------------------
CIAO 4.2 Monday, November 30, 2009

sherpa-1> def flatmod(params, x, y):
              print("Lower left = {0},{1}".format(x[0],y[0]))
              print("Upper right = {0},{1}".format(x[-1],y[-1]))
              return x*0 + params[0]
sherpa-2> load_user_model(flatmod, "reallyflat")
sherpa-3> add_user_pars("reallyflat", ["ampl"])
sherpa-4> print(reallyflat)
usermodel.reallyflat
   Param            Type          Value          Min          Max Units
   -----            ----          -----          ---          --- -----
   reallyflat.ampl  thawed            0 -3.40282e+38  3.40282e+38
sherpa-5> reallyflat.ampl = 2

So here I have created the model reallyflat which sets every pixel to the value of 2 (i.e. it ignores the coordinates of the input). As a side effect it prints out the first and last coordinate sent to it (which will be the bottom-left and top-right pixels if there are no image filters). I created the model on the fly from within Sherpa for demonstration purposes; in normal use I would write the model definition in an external file and load it in using execfile() or import.
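
For example (the file name and its contents here are hypothetical), the definition could live in its own file:

# flatmod.py -- hypothetical external file holding the model
def flatmod(params, x, y):
    # a constant model: ignore the coordinates, return the amplitude
    return x*0 + params[0]

and then execfile("flatmod.py") from the Sherpa prompt, followed by the load_user_model() and add_user_pars() calls shown above.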

Now we load in an image and use the model:

sherpa-6> load_data("img.fits")
sherpa-7> set_source(reallyflat)
sherpa-8> image_data()
sherpa-9> image_model()
Lower left = 1.0,1.0
Upper right = 421.0,367.0

Displaying the model causes it to be evaluated, as shown above. If we use image_source() we see the same result (we have no PSF set up so there is no actual difference between the source and model views in this case).

sherpa-10> image_source()
Lower left = 1.0,1.0
Upper right = 421.0,367.0

The default coordinate system is "logical", which matches the FITS convention, where (1,1) is the center of the bottom-left pixel and each pixel is square with a width of 1. Below we change to the "physical" system, which uses the ACIS SKY coordinates (as this is an ACIS image). Note that the coordinate values sent to the model have changed.

sherpa-11> set_coord("physical")
sherpa-12> image_source()
Lower left = 4064.0,3511.0
Upper right = 4484.0,3877.0
sherpa-13> image_model()
Lower left = 4064.0,3511.0
Upper right = 4484.0,3877.0
sherpa-14> image_fit()
Lower left = 4064.0,3511.0
Upper right = 4484.0,3877.0
Lower left = 4064.0,3511.0
Upper right = 4484.0,3877.0

The image_fit() output above shows that the model is evaluated twice; once for the model display and once to create the residual image.

An example of a slightly more complex 2D model would be:

import numpy as np

def mygauss2d(pars, x, y):
    "My version of a gaussian"
    (sigma, ampl, xpos, ypos) = pars
    r2 = (x - xpos)**2 + (y - ypos)**2
    return ampl * np.exp(-r2/(sigma*sigma))

load_user_model(mygauss2d, "g2d")
add_user_pars("g2d",
              ["sigma", "ampl", "xpos", "ypos"],
              [5,1,1,1], [0.1,0,0,0])

where I use the optional third and fourth arguments of add_user_pars() to set up both the starting and minimum values of the parameters.

Simple user-defined models using a function

Typically, users have a particular model in mind for use in Sherpa. Here is a simple user-model example of creating a gaussian plus a constant.

I define a function 'mymodel' that accepts a Python list of parameter values and a NumPy ndarray of x values. It calculates and returns the sum of a gaussian and a constant as a NumPy ndarray.

I define a 1D dataspace from [0.1, 10] with a bin size of 0.01 and load the user-model function identified as 'mdl'. Using the identifier, I populate the parameter names and initial values.

Set up the model, fake the data, and plot!

#!/usr/bin/env python
from sherpa.astro.ui import *
import numpy as np

def mymodel(pars, x, *args, **kwargs):
    """
    Evaluate a 1D Gaussian plus constant

    pars = { fwhm, pos, ampl, const }
    """
    gfactor = 4.0*np.log(2.0)
    ratio = (x - pars[1])/pars[0]
    return pars[2] * np.exp(-gfactor * ratio * ratio) + pars[3]

# create simple grid [0.1, 10] with a bin size of 0.01
dataspace1d(0.1,10,0.01, dstype=Data1D)

# load the user-model mymodel() with parameter names into
# variable 'mdl'
load_user_model(mymodel,'mdl')
add_user_pars('mdl',['fwhm', 'pos', 'ampl', 'const'])

# initialize parameter values
mdl.pos = 5.0
mdl.fwhm = 2.5
mdl.ampl = 75
mdl.const = 0.1

# set the identifier as the associated model
set_model(mdl)

# evaluate, add poisson noise, and populate the dataspace
fake()

# plot faked data
plot_data()

So, can I shut Sherpa up?

Say you want to run a bunch of Sherpa commands (maybe in a script, or in an interactive session) but don't want to see the output from the commands. What should you do?

Well, you can take advantage of the not-yet-documented-in-CIAO-but-really-should-be Python logging infrastructure used by Sherpa. Start with

import logging
logger = logging.getLogger("sherpa")

and then you turn off the output using one of

logger.setLevel(logging.WARN)
logger.setLevel(logging.ERROR)

and turn it back on again with

logger.setLevel(logging.INFO)

The argument determines what severity of messages are displayed (so logging.WARN means warning and above, which excludes the INFO level used for most output from Sherpa).

As an example, compare the screen output for the three load_pha() calls below:

% sherpa
-----------------------------------------------------
Welcome to Sherpa: CXC's Modeling and Fitting Package
-----------------------------------------------------
CIAO 4.2 Monday, November 30, 2009

sherpa-1> import logging
sherpa-2> logger = logging.getLogger("sherpa")
sherpa-3> load_pha("3c273.pi")
WARNING: systematic errors were not found in file '3c273.pi'
statistical errors were found in file '3c273.pi' but not used; to use them, re-read with use_errors=True
read ARF file 3c273.arf
read RMF file 3c273.rmf
WARNING: systematic errors were not found in file '3c273_bg.pi'
statistical errors were found in file '3c273_bg.pi' but not used; to use them, re-read with use_errors=True
read background file 3c273_bg.pi
sherpa-4> logger.setLevel(logging.WARN)
sherpa-5> load_pha("3c273.pi")
WARNING: systematic errors were not found in file '3c273.pi'
WARNING: systematic errors were not found in file '3c273_bg.pi'
sherpa-6> logger.setLevel(logging.ERROR)
sherpa-7> load_pha("3c273.pi")
sherpa-8> logger.setLevel(logging.INFO)

This can be particularly useful for hiding the on-screen results of a call to fit() or one of the error routines such as conf() and proj(); if you do this, though, you will probably want to use the corresponding get_xxx_results routine to get at the calculated data (as sketched below)!
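
For example, here is a minimal sketch (assuming a data set and model have already been set up) that runs a quiet fit and then pulls the numbers back out of get_fit_results():

import logging
logger = logging.getLogger("sherpa")

logger.setLevel(logging.ERROR)   # hide the usual fit() report
fit()
logger.setLevel(logging.INFO)    # restore normal output

# the results are still stored in the session
res = get_fit_results()
print res.statval, res.dof
print res.parnames
print res.parvals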

You can do a lot more with this "logger" object, such as re-directing the output to a file (or even to both the screen and a file). See 'help logging' or this logging tutorial (one of many that are out there for you intrepid readers) for more information.
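
As a sketch of the file redirection (this is standard Python logging, nothing Sherpa-specific; the file name is arbitrary):

import logging
logger = logging.getLogger("sherpa")

# send a copy of Sherpa's output to a file as well as the screen
fh = logging.FileHandler("sherpa.log")
logger.addHandler(fh)

# ... run Sherpa commands here ...

# stop logging to the file
logger.removeHandler(fh)
fh.close()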

Fake data FAST!

Sometimes users just want to simulate a simple model and plot it. This can be done quickly in five easy steps: define the grid, define the model, initialize the parameter values, add noise, and plot.

Example:

#!/usr/bin/env python
from sherpa.astro.ui import *

# create simple grid [0.1, 10] with a
# bin size of 0.01
dataspace1d(0.1,10,0.01, dstype=Data1D)

# define a gaussian model plus a constant
set_model(gauss1d.g1+const1d.c1)

# initialize parameter values
g1.pos = 5
g1.fwhm = 2.5
g1.ampl = 75
c1.c0 = 0.1

# evaluate the model, add poisson noise,
# and populate the dataspace
fake()

# plot faked data
plot_data()

Wednesday, May 12, 2010

New Sherpa this Summer

It looks like we will be releasing a new Sherpa version this summer. Stand by for updates.

Tuesday, May 11, 2010

Installing Standalone Sherpa for Python.

Tom has posted instructions on Astropython for installing standalone Sherpa for X-ray analysis. Note that this is mainly useful for Python users who want to import Sherpa into their own Python environment.

Monday, May 10, 2010

Combine Sherpa Models with Deprojected Models in Sherpa

Yesterday, I used a nice trick in Sherpa to combine standard models and deprojected models. Using the Deproject Python package in Sherpa, one can fit emission models to X-ray clusters while accounting for the 3D-to-2D projection effects in the data (the standard "onion-peeling" technique described by Fabian et al. 1981 and Kriss et al. 1983). This yields deprojected temperature, density, and entropy distributions in the cluster. Sometimes a combination of deprojected and standard model components is needed, and get_source() and set_source() can be used to build such combinations. The trick is to set up the deprojected model components first and then use get_source() to parse the model expression into a new set_source() expression with an extra model component added.

dep.set_source("xsmekal")
set_source(0, get_source(0)+powlaw1d.p1)
set_source(1, get_source(1)+powlaw1d.p1)
set_source(2, get_source(2)+powlaw1d.p2)
set_source(3, get_source(3)+powlaw1d.p2)

This is the part of my script where I insert the expression for each data set by hand; the same thing can be done more cleanly with a loop, as sketched below.
The deprojected source expression is set by dep.set_source() for all data sets, as explained in the Deproject documentation. Here I used just the standard XSPEC mekal model. Note that there are 2 annuli and each of them has 2 data sets; I add the same power-law component (p1 or p2) to both data sets of an annulus.
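
Here is a minimal sketch of the loop version, assuming the same layout as above (two annuli, two data sets per annulus, numbered consecutively); create_model_component() builds each power-law component by name:

# add one power-law component per annulus, shared by both of
# that annulus's data sets
nannuli = 2
for ann in range(nannuli):
    pl = create_model_component("powlaw1d", "p%d" % (ann + 1))
    for ds in (2*ann, 2*ann + 1):
        set_source(ds, get_source(ds) + pl)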


get_model() allows you to inspect the full model expression.



Thursday, April 29, 2010

Define parameter grid and calculate statistics - steppar functionality in Sherpa

It is trivial to define a grid in Sherpa's Python environment and then calculate the statistic values for a set of (par1, par2) parameters:

low_lim1=0.1
up_lim1=2.0
nstep1=10
low_lim2=1.5
up_lim2=3.0
nstep2=20
par1_grid=numpy.linspace(low_lim1, up_lim1, nstep1)
par2_grid=numpy.linspace(low_lim2, up_lim2, nstep2)

#Now calculate the statistics for all the parameters on the grid

stat_vals=numpy.zeros((nstep1, nstep2))

for i, parval1 in enumerate(par1_grid):
    set_par(par1, parval1)
    for j, parval2 in enumerate(par2_grid):
        set_par(par2, parval2)
        stat_vals[i,j] = calc_stat()
        print '%d %.3f %d %.3f %f' % (i, parval1, j, parval2, stat_vals[i, j])
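
To pull the best grid point back out afterwards, a small numpy-only add-on:

# locate the minimum statistic value on the grid
imin, jmin = numpy.unravel_index(stat_vals.argmin(), stat_vals.shape)
print 'best grid point: par1=%.3f par2=%.3f stat=%f' % \
      (par1_grid[imin], par2_grid[jmin], stat_vals[imin, jmin])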

Probing Parameter Space using Sampler in Sherpa

There is no fully Bayesian sampler inside Sherpa yet; however, there is a way to sample the parameter space after the fit has converged. The main assumptions are that sampling starts from the best-fit parameter values and that the free parameters have Normal distributions, with scales based on the covariance matrix (including the off-diagonal elements, to account for correlations). The function sample_energy_flux() returns an array with the sampled parameters and the flux calculated for each set of parameters.

The thread http://cxc.harvard.edu/sherpa/threads/flux_dist/index.py.html describes how to use these functions to calculate the flux errors.

For the parameter errors, it is easy to check the distribution of the parameters returned by sample_energy_flux().

Example:
sample100=sample_energy_flux(0.5,2.,num=1000)
print sample100
[...
[  2.90602963e-10   1.10100135e+00   8.39761377e-01   7.01873798e-01
   2.35626905e+00   1.03667988e+00]...
]

The first column contains the flux value calculated from the parameter values in the remaining columns.

We can easily inspect the sampled values of a given parameter and their full range:

sherpa-100> print sample100[:,1]
----------> print(sample100[:,1])
[ 1.10121073  1.1011715   1.10100135  1.10191317  1.10233657  1.10103386
  1.10360142  1.09774949  1.0984214   1.09927707  1.10041681  1.09919926

sherpa-102> min(sample100[:,1])
            1.0972469907219351
sherpa-103> max(sample100[:,1])
            1.1065345259787218
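
To turn this into an actual 95% interval rather than the full range, sort and index the quantiles directly (a small sketch; newer NumPy versions also provide numpy.percentile):

sorted_par = numpy.sort(sample100[:,1])
n = len(sorted_par)
lo = sorted_par[int(0.025*(n-1))]
hi = sorted_par[int(0.975*(n-1))]
print "95% interval: [%g, %g]" % (lo, hi)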

We can use numpy to obtain the properties of the parameter distribution:
sherpa-104> numpy.median(sample100[:,1])
            1.1015496488552339
sherpa-105> numpy.mean(sample100[:,1])
            1.1015685708445886
sherpa-106> numpy.std(sample100[:,1])
            0.0015635753706038143

We can also sort the samples, pick off a quantile, and make a histogram of the distribution:
sort_par=numpy.sort(sample100[:,1])
sort_par[int(0.99*len(sample100[:,1]))-1]

# make a histogram of parameter value and plot:
xspace=numpy.linspace(min(sample100[:,1]),max(sample100[:,1]),30)
xlo=xspace[:-1]
xhi=xspace[1:]
parhist=histogram1d(sample100[:,1],xlo,xhi)
parhist100=parhist/1000.
add_histogram(xlo,parhist100)
print_window("hist.png")

The histogram can be saved, but I could not place the figure into Blogger, a known issue with the Google Blogger editor.