We found and applied the scale factors of each sample. To obtain these we used a maximum-likelihood (ML) fit of the MC samples to the data.

# Theoretical description of the method


# The structure of the Higgs CombinedLimit tool

The Higgs CombinedLimit tool is a toolbox for various kinds of statistical work.

In this analysis we performed the fit with the Barlow-Beeston-lite method in this tool, mainly using the following programs.

`scripts/text2workspace.py`: a Python script which generates the configuration for the statistical task, such as the p.d.f. for the ML fit.

`exe/combine`: the main program of this toolbox; it carries out the actual statistical work (fits, limit estimation, etc.) using the output of `scripts/text2workspace.py`.

## Notes on `scripts/text2workspace.py`

The tasks which this Python script performs can be summarized in the following procedures:

1. Parsing a given datacard
2. Applying a given physics model
3. Gathering POIs/NPs (POI: parameter of interest, NP: nuisance parameter)
4. Synthesizing the p.d.f. (for the ML fit) and the variables in the RooFit and RooStats framework

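Condensed, the flow of the script looks roughly like this (schematic pseudocode built only from the calls quoted in this note, not the literal contents of the script):

```
DC = parseCard(file, options)  # 1. parse the datacard (python/DatacardParser.py)
# 2. load the physics model, e.g. multiSignalModel (python/PhysicsModel.py)
# 3.-4. gather POIs/NPs and synthesize the p.d.f.:
MB.doModel()                   # MB is a ModelBuilder (ShapeBuilder for shape fits)
```
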
I will not elaborate on the first procedure because it is straightforward (see `python/DatacardParser.py`); its starting point can be found in the script at the line `DC = parseCard(file, options)`.

The physics model is loaded near the very end of the script, just before `MB.doModel()`. The available models can be found in `python/PhysicsModel.py`. In particular, for the fit we use `multiSignalModel`, because this model is convenient for defining and using several POIs.

The other procedures are performed in the line `MB.doModel()`, i.e. in the method `doModel()` of the `ModelBuilder` class (see `python/ModelTools.py`). The core part of this method is:

```
self.doObservables()
self.physics.doParametersOfInterest()
...
self.physics.preProcessNuisances(self.DC.systs)
self.doNuisances()
self.doExtArgs()
self.doRateParams()
self.doExpectedEvents()
self.doIndividualModels()
self.doNuisancesGroups() # this needs to be called after both doNuisances and doIndividualModels
self.doCombination()
self.runPostProcesses()
```

Note that for our fitting task the `ShapeBuilder` class, whose parent is `ModelBuilder` (see `python/ShapeTools.py`), is used.

Several parameter settings, including the third procedure, are done before `doCombination()`. You can check how the nuisance parameters (in particular the systematic uncertainties) are imported in `doNuisances()`, which parses the nuisance-parameter configuration and creates the corresponding variables and constraint p.d.f.s in RooFit and RooStats. If you are unsure what a given type of systematic uncertainty or nuisance parameter does, or if you are looking for a particular function, it is highly recommended to read `doNuisances()` carefully. Note that the prior constraints of the parameters, even including those of the POIs, are set in this procedure. Also note that the nuisance parameters for the MC statistical uncertainties are not treated in this function; they are handled in `doIndividualModels()` of the `ShapeBuilder` class.

After the parameters (POIs, NPs, MC statistics) are arranged, the synthesis of the p.d.f. is performed in `doCombination()` of the `ShapeBuilder` class: all of the (Poisson) p.d.f.s for the individual bins and the constraint p.d.f.s are put together into one single p.d.f. If you run the script with a high verbosity (>= 4), the script will print the final p.d.f. For example, if one runs with the following card

```
imax 1
jmax 1
kmax *
---------------
shapes * * multibin_input_simpler.root $PROCESS_$CHANNEL $PROCESS_$CHANNEL_$SYSTEMATIC
---------------
bin bin1
observation 15
------------------------------
bin bin1 bin1
process sig1_bin1 bkg1_bin1
process 0 1
rate 8 7
--------------------------------
r1_constr constr r1-1.0 1.0
a1_constr constr a1-1.0 1.0

lumi lnN 1.10 1.10

bin1 autoMCStats 0
```

with the following command

```
python scripts/text2workspace.py -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO 'map=bin1/sig1_bin1:r1[1,0,10]' --PO 'map=bin1/bkg1_bin1:a1[1,0,10]' simple-shapes-simpler_TH1.txt
```

and with `multibin_input_simpler.root`, which contains the 2-bin histograms `sig1_bin1_bin1`, `bkg1_bin1_bin1`, and `data_obs_bin1`, you will obtain the following tree of p.d.f.s (along with a large amount of other output):

```
p.d.f.s
-------
RooSimultaneousOpt::model_b[...
...
RooSimultaneousOpt::model_s[ indexCat=CMS_channel bin1=pdf_binbin1 extraConstraints=() channelMasks=() ] = 7
RooProdPdf::pdf_binbin1[ r1_constr_Pdf * a1_constr_Pdf * lumi_Pdf * pdf_binbin1_nuis * pdfbins_binbin1 ] = 7
RooRealSumPdf::pdf_binbin1_nuis[ ONE * prop_binbin1 ] = 7
CMSHistErrorPropagator::prop_binbin1[ x=CMS_th1x funcs=(shapeSig_sig1_bin1_bin1_rebinPdf,shapeBkg_bkg1_bin1_bin1_rebinPdf) coeffs=(n_exp_binbin1_proc_sig1_bin1,n_exp_binbin1_proc_bkg1_bin1) binpars=(prop_binbin1_bin0,prop_binbin1_bin1) ] = 7
CMSHistFunc::shapeSig_sig1_bin1_bin1_rebinPdf[ x=CMS_th1x vmorphs=() hmorphs=() ] = 5
CMSHistFunc::shapeBkg_bkg1_bin1_bin1_rebinPdf[ x=CMS_th1x vmorphs=() hmorphs=() ] = 2
ProcessNormalization::n_exp_binbin1_proc_sig1_bin1[ thetaList=(lumi) asymmThetaList=() otherFactorList=(r1) ] = 1
ProcessNormalization::n_exp_binbin1_proc_bkg1_bin1[ thetaList=(lumi) asymmThetaList=() otherFactorList=(a1) ] = 1
RooProdPdf::pdfbins_binbin1[ prop_binbin1_bin0_Pdf * prop_binbin1_bin1_Pdf ] = 1
SimpleGaussianConstraint::prop_binbin1_bin0_Pdf[ x=prop_binbin1_bin0 mean=prop_binbin1_bin0_In sigma=1 ] = 1
SimpleGaussianConstraint::prop_binbin1_bin1_Pdf[ x=prop_binbin1_bin1 mean=prop_binbin1_bin1_In sigma=1 ] = 1
RooGaussian::r1_constr_Pdf[ x=r1_constr_In mean=r1_constr_Func sigma=r1_constr_S ] = 1
RooFormulaVar::r1_constr_Func[ actualVars=(r1) formula="r1-1.0" ] = 0
RooGaussian::a1_constr_Pdf[ x=a1_constr_In mean=a1_constr_Func sigma=a1_constr_S ] = 1
RooFormulaVar::a1_constr_Func[ actualVars=(a1) formula="a1-1.0" ] = 0
SimpleGaussianConstraint::lumi_Pdf[ x=lumi mean=lumi_In sigma=1 ] = 1
RooProdPdf::nuisancePdf[ r1_constr_Pdf * a1_constr_Pdf * lumi_Pdf ] = 1
RooGaussian::r1_constr_Pdf[ x=r1_constr_In mean=r1_constr_Func sigma=r1_constr_S ] = 1
RooFormulaVar::r1_constr_Func[ actualVars=(r1) formula="r1-1.0" ] = 0
RooGaussian::a1_constr_Pdf[ x=a1_constr_In mean=a1_constr_Func sigma=a1_constr_S ] = 1
RooFormulaVar::a1_constr_Func[ actualVars=(a1) formula="a1-1.0" ] = 0
SimpleGaussianConstraint::lumi_Pdf[ x=lumi mean=lumi_In sigma=1 ] = 1
```

(You may not need to consider `model_b`.)

This is the p.d.f. used in the ML fit. You can see `model_s`, an object of `RooSimultaneousOpt` (a thin wrapper around `RooSimultaneous`), which wraps the 'bins' into one p.d.f. Note that in this example we used only one 'bin' (see the 'bin' line in the card); if one uses several bins, several `RooProdPdf::pdf_bin*` entries appear in `model_s`. `RooProdPdf::pdf_binbin1` corresponds to the factors with $`\textrm{obs. bin $i$}`$ in the likelihood given above. You can see the p.d.f. of each source histogram, such as `CMSHistFunc::shapeSig_sig1_bin1_bin1_rebinPdf` and `CMSHistFunc::shapeBkg_bkg1_bin1_bin1_rebinPdf`, as well as the constraints for the POIs and nuisance parameters: `RooProdPdf::pdfbins_binbin1` for the constraints of the Barlow-Beeston-lite method, `RooGaussian::r1_constr_Pdf` and `RooGaussian::a1_constr_Pdf` for the prior constraints on the POIs, and `SimpleGaussianConstraint::lumi_Pdf` for a nuisance parameter from a systematic uncertainty.

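Schematically, the NLL built from such a p.d.f. is just a sum of per-bin Poisson terms and per-nuisance constraint terms. The following is a conceptual Python sketch (my own illustration with made-up function names, not combine's code):

```python
import math

def nll(expected, observed, nuisances):
    """Sketch of -log L: per-bin Poisson terms plus unit-Gaussian
    constraint terms for the nuisance parameters (constants dropped)."""
    val = 0.0
    for mu, n in zip(expected, observed):
        val += mu - n * math.log(mu)  # Poisson: -log(mu^n e^-mu / n!) + const.
    for theta in nuisances:
        val += 0.5 * theta * theta    # Gaussian(theta | 0, 1) constraint
    return val

# With the card above (rate 8 + 7 expected, 15 observed) the Poisson part
# is minimized exactly at the expected yield:
assert nll([15.0], [15], [0.0]) < nll([14.0], [15], [0.0])
assert nll([15.0], [15], [0.0]) < nll([16.0], [15], [0.0])
```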
With this workspace the fit is performed by the `exe/combine` program. We will talk more about `CMSHistErrorPropagator` in the next section.

## Notes on `exe/combine`

You can find the `main()` of this program in `exe/combine.cc`; it is an interface wrapper around the main class `Combine` in `src/Combine.cc` and runs the core function `Combine::run()`. In the following, I will focus on the details of the ML fit with the profile likelihood and the Barlow-Beeston-lite method.

One can follow the following chain of functions to trace the construction of the NLL (negative log-likelihood):

`main()`
-> `Combine::run()`
-> `Combine::mklimit()` (note: with `nToys <= 0`)
-> `FitterAlgoBase::run()` (note: `algo->run()`)
-> `MultiDimFit::runSpecific()`
-> `FitterAlgoBase::doFit()`

Finally, in `doFit()` one can find `pdf.createNLL()`. The remaining discussion is devoted to the calculation of this NLL; its core can be found in `src/CachingNLL.cc` and `src/CMSHistErrorPropagator.cc`.

The `CMSHistErrorPropagator` class is responsible for the error propagation within the profile likelihood method (see [1] for the details of this method, especially Theorem 3.1.1 for the error propagation of nuisance parameters).

## How to treat systematic uncertainties: MC statistics

First of all, the Barlow-Beeston-lite method (see [2] and [3]) is handled in `CMSHistErrorPropagator::runBarlowBeeston()`:

```
for (unsigned j = 0; j < n; ++j) {
  bb_.b[j] = bb_.toterr[j] + (bb_.valsum[j] / bb_.toterr[j]) - bb_.gobs[j];
  bb_.c[j] = bb_.valsum[j] - bb_.dat[j] - (bb_.valsum[j] / bb_.toterr[j]) * bb_.gobs[j];
  bb_.tmp[j] = -0.5 * (bb_.b[j] + copysign(1.0, bb_.b[j]) * std::sqrt(bb_.b[j] * bb_.b[j] - 4. * bb_.c[j]));
  bb_.x1[j] = bb_.tmp[j];
  bb_.x2[j] = bb_.c[j] / bb_.tmp[j];
  bb_.res[j] = std::max(bb_.x1[j], bb_.x2[j]);
}
```

This is basically the (larger) root of the following quadratic equation:

```math
x^2 + \left( \sigma + \frac{\mu}{\sigma} - g \right) x + \left( \mu - n - \frac{\mu}{\sigma} g \right) = 0,
```

where $`n`$ is `bb_.dat[j]`, $`\mu`$ is `bb_.valsum[j]`, $`\sigma`$ is `bb_.toterr[j]`, and $`g`$ is `bb_.gobs[j]`. If we put $`g = 0`$ (admittedly, I still have no idea what `bb_.gobs[j]` is...), this equation results from the minimization of the following:

```math
\begin{aligned}
-\log{L} &= -n \log{(\mu + \sigma x)} + (\mu + \sigma x) + \frac{x^2}{2} \\
&= -\log{\left( \textrm{Poisson}(n \mid \mu + \sigma x) \, \textrm{Gaussian}(0 \mid x, 1) \right)} + \textrm{const.}
\end{aligned}
```

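As a cross-check, setting the derivative of this NLL with respect to $`x`$ to zero and multiplying by $`(\mu + \sigma x)/\sigma`$ reproduces the quadratic equation above with $`g = 0`$:

```math
-\frac{n \sigma}{\mu + \sigma x} + \sigma + x = 0
\quad \Longrightarrow \quad
x^2 + \left( \sigma + \frac{\mu}{\sigma} \right) x + \left( \mu - n \right) = 0.
```
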
This root is calculated when the NLL is evaluated (see `CMSHistErrorPropagator::updateCache()`; `updateCache()` is used for the evaluation of objects in the RooFit framework) and substituted for $`x_{ij}`$ in $`\mu_{ij}`$. Thus we have seen how the Barlow-Beeston-lite method works within the profile likelihood method.

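The closed-form root can also be checked numerically. The sketch below (a standalone illustration with arbitrary numbers; `bb_lite_root` mirrors the `runBarlowBeeston()` snippet above for the $`g = 0`$ case) verifies that the returned root indeed minimizes the NLL:

```python
import math

def bb_lite_root(n, mu, sigma):
    # Coefficients of x^2 + b*x + c = 0 (the g = 0 case)
    b = sigma + mu / sigma
    c = mu - n
    # Numerically stable quadratic formula, as in runBarlowBeeston():
    # compute the cancellation-free root first, get the other via c = x1 * x2
    tmp = -0.5 * (b + math.copysign(1.0, b) * math.sqrt(b * b - 4.0 * c))
    return max(tmp, c / tmp)

def bb_lite_nll(x, n, mu, sigma):
    # -log L = -n log(mu + sigma x) + (mu + sigma x) + x^2 / 2
    return -n * math.log(mu + sigma * x) + (mu + sigma * x) + 0.5 * x * x

n, mu, sigma = 15.0, 14.2, 1.3  # arbitrary test values
x0 = bb_lite_root(n, mu, sigma)
# The root should be a stationary minimum of the NLL:
assert bb_lite_nll(x0, n, mu, sigma) <= bb_lite_nll(x0 + 1e-4, n, mu, sigma)
assert bb_lite_nll(x0, n, mu, sigma) <= bb_lite_nll(x0 - 1e-4, n, mu, sigma)
```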
## How to treat systematic uncertainties: Multiplicative uncertainties and shape uncertainties



According to Conway [3], the uncertainties can be classified into two types: multiplicative uncertainties and shape uncertainties. The former are involved in overall scale factors (for individual processes or for all processes), such as the uncertainty from the luminosity. The other uncertainties can be regarded as shape uncertainties, i.e. uncertainties related to non-uniform morphings of the distributions, such as the JER and JES uncertainties.

Conway treats all of them in a specific way, described in the slides above; see the formula for $`p_{rij}`$. But there are some differences from the original approach in [3], which Conway also mentions briefly.

- The factor for a multiplicative uncertainty is not a simple factor of the form $`(1 + \sigma \phi)`$ ($`\phi`$ being the nuisance parameter and $`\sigma`$, for example, the uncertainty of the luminosity). Such a form can yield a zero or negative factor, which is unphysical and harmful for the minimizer. To prevent this, Conway suggests (in short) using a log-normal distribution, which is what the Higgs combine tool implements, as described in the slide.

- The function $`g`$ represents the morphing along the given uncertainty. The morphing is controlled by one nuisance parameter $`\theta`$: if it is 0, +1, or -1, the distribution is exactly the nominal, the 'up-shifted', or the 'down-shifted' distribution, respectively. Also, we do not want too large deviations when the nuisance parameter is outside of $`[-1, 1]`$, so we want the function to behave linearly in that region. To satisfy all of this, $`g`$ is defined as a polynomial satisfying the above conditions within $`\theta \in [-1, 1]`$, while it behaves as a linear function outside the interval (see the slide). One can see in the slide that there are further conditions on the derivatives of $`g`$; they make the function smooth (not *smooth* in the mathematical sense, but smooth enough), though this is not given explicitly in [3] (Conway leaves it to the reader as an exercise).

Eventually, the function can be written as follows:

```math
g(\theta, a^+, a^-) = \left\{
\begin{array}{ll}
\frac{a^+ - a^-}{2} \theta + \frac{a^+ + a^-}{16} \left( 15 \theta^2 - 10 \theta^4 + 3 \theta^6 \right) & -1 \le \theta \le 1 \\
-a^- \theta & \theta < -1 \\
a^+ \theta & \theta > 1 \\
\end{array}
\right.
```
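As a sanity check, the function can be implemented and applied directly. The sketch below is my own illustration, not combine's code, and it assumes the per-bin convention $`a^+ = h^{\mathrm{up}} - h^{\mathrm{nom}}`$ and $`a^- = h^{\mathrm{down}} - h^{\mathrm{nom}}`$, under which $`\theta = 0, +1, -1`$ reproduce the nominal, up-shifted, and down-shifted bin contents exactly:

```python
def g(theta, a_up, a_dn):
    # Polynomial interpolation inside [-1, 1], linear extrapolation outside
    if theta < -1.0:
        return -a_dn * theta
    if theta > 1.0:
        return a_up * theta
    t2 = theta * theta
    return (0.5 * (a_up - a_dn) * theta
            + (a_up + a_dn) / 16.0 * (15.0 * t2 - 10.0 * t2 * t2 + 3.0 * t2 * t2 * t2))

def morph(theta, nom, up, dn):
    # Vertical morphing of one bin content (a_up/a_dn as in the lead-in)
    return nom + g(theta, up - nom, dn - nom)

nom, up, dn = 5.0, 5.4, 4.7  # hypothetical bin contents
assert abs(morph(0.0, nom, up, dn) - nom) < 1e-12   # nominal
assert abs(morph(1.0, nom, up, dn) - up) < 1e-12    # up-shifted template
assert abs(morph(-1.0, nom, up, dn) - dn) < 1e-12   # down-shifted template
# continuity at the matching point with the linear branch
assert abs(morph(1.0 + 1e-9, nom, up, dn) - up) < 1e-6
```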

With these treatments and parameterizations, all uncertainties, including the MC statistical uncertainties, are profiled in the fit procedure.

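The difference between the linear and the log-normal parameterization of a multiplicative factor is easy to see numerically; the following sketch is my own illustration (in combine the corresponding objects are `ProcessNormalization` and `AsymPow`, visible in the p.d.f. trees) showing that $`\kappa^{\theta}`$ stays positive for any value of the nuisance parameter:

```python
def linear_factor(theta, sigma):
    # Naive parameterization: can become zero or negative
    return 1.0 + sigma * theta

def lognormal_factor(theta, kappa):
    # lnN parameterization: kappa**theta, always positive
    return kappa ** theta

# A 10% uncertainty (kappa = 1.10, as in the `lumi lnN 1.10` line above)
assert lognormal_factor(0.0, 1.10) == 1.0
assert abs(lognormal_factor(1.0, 1.10) - 1.10) < 1e-12
assert lognormal_factor(-30.0, 1.10) > 0.0  # still physical
assert linear_factor(-30.0, 0.10) < 0.0     # unphysical negative yield
```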
For example, the following card is a modified version of the card above, now containing the indication of a shape uncertainty.

```
imax 1
jmax 1
kmax *
---------------
shapes * * multibin_input_simpler.root $PROCESS_$CHANNEL $PROCESS_$CHANNEL_$SYSTEMATIC
---------------
bin bin1
observation 15
------------------------------
bin bin1 bin1
process sig1_bin1 bkg1_bin1
process 0 1
rate 8 7
--------------------------------
r1_constr constr r1-1.0 1.0
a1_constr constr a1-1.0 1.0

lumi lnN 1.10 1.10
uncTest shape 1.00 -

bin1 autoMCStats 0
```

Now you can see a new line, `uncTest shape 1.00 -`, which introduces a shape uncertainty named `uncTest`. With this line the ROOT file (`multibin_input_simpler.root`) must contain histograms named `sig1_bin1_bin1_uncTestUp` and `sig1_bin1_bin1_uncTestDown`, following the `$PROCESS_$CHANNEL_$SYSTEMATIC` pattern in the `shapes` line. If you also want to assign the uncertainty to the `bkg1_bin1` process, change the `-` on the `uncTest` line to `1.0` and add histograms named `bkg1_bin1_bin1_uncTestUp` and `bkg1_bin1_bin1_uncTestDown`. The p.d.f. given above changes as follows when one multiplicative uncertainty and one shape uncertainty are given:

```
p.d.f.s
-------
RooSimultaneousOpt::model_b[...
...
RooSimultaneousOpt::model_s[ indexCat=CMS_channel bin1=pdf_binbin1 extraConstraints=() channelMasks=() ] = 7
RooProdPdf::pdf_binbin1[ r1_constr_Pdf * a1_constr_Pdf * lumi_Pdf * uncTest_Pdf * pdf_binbin1_nuis * pdfbins_binbin1 ] = 7
RooRealSumPdf::pdf_binbin1_nuis[ ONE * prop_binbin1 ] = 7
CMSHistErrorPropagator::prop_binbin1[ x=CMS_th1x funcs=(shapeSig_bin1_sig1_bin1_morph,shapeBkg_bkg1_bin1_bin1_rebinPdf) coeffs=(n_exp_final_binbin1_proc_sig1_bin1,n_exp_binbin1_proc_bkg1_bin1) binpars=(prop_binbin1_bin0,prop_binbin1_bin1) ] = 7
CMSHistFunc::shapeSig_bin1_sig1_bin1_morph[ x=CMS_th1x vmorphs=(uncTest) hmorphs=() ] = 5
CMSHistFunc::shapeBkg_bkg1_bin1_bin1_rebinPdf[ x=CMS_th1x vmorphs=() hmorphs=() ] = 2
RooProduct::n_exp_final_binbin1_proc_sig1_bin1[ n_exp_binbin1_proc_sig1_bin1 * systeff_bin1_sig1_bin1_uncTest ] = 1
AsymPow::systeff_bin1_sig1_bin1_uncTest[ kappaLow=0.975000 kappaHigh=1.025000 theta=uncTest ] = 1
ProcessNormalization::n_exp_binbin1_proc_sig1_bin1[ thetaList=(lumi) asymmThetaList=() otherFactorList=(r1) ] = 1
ProcessNormalization::n_exp_binbin1_proc_bkg1_bin1[ thetaList=(lumi) asymmThetaList=() otherFactorList=(a1) ] = 1
RooProdPdf::pdfbins_binbin1[ prop_binbin1_bin0_Pdf * prop_binbin1_bin1_Pdf ] = 1
SimpleGaussianConstraint::prop_binbin1_bin0_Pdf[ x=prop_binbin1_bin0 mean=prop_binbin1_bin0_In sigma=1 ] = 1
SimpleGaussianConstraint::prop_binbin1_bin1_Pdf[ x=prop_binbin1_bin1 mean=prop_binbin1_bin1_In sigma=1 ] = 1
RooGaussian::r1_constr_Pdf[ x=r1_constr_In mean=r1_constr_Func sigma=r1_constr_S ] = 1
RooFormulaVar::r1_constr_Func[ actualVars=(r1) formula="r1-1.0" ] = 0
RooGaussian::a1_constr_Pdf[ x=a1_constr_In mean=a1_constr_Func sigma=a1_constr_S ] = 1
RooFormulaVar::a1_constr_Func[ actualVars=(a1) formula="a1-1.0" ] = 0
SimpleGaussianConstraint::lumi_Pdf[ x=lumi mean=lumi_In sigma=1 ] = 1
SimpleGaussianConstraint::uncTest_Pdf[ x=uncTest mean=uncTest_In sigma=1 ] = 1
RooProdPdf::nuisancePdf[ r1_constr_Pdf * a1_constr_Pdf * lumi_Pdf * uncTest_Pdf ] = 1
RooGaussian::r1_constr_Pdf[ x=r1_constr_In mean=r1_constr_Func sigma=r1_constr_S ] = 1
RooFormulaVar::r1_constr_Func[ actualVars=(r1) formula="r1-1.0" ] = 0
RooGaussian::a1_constr_Pdf[ x=a1_constr_In mean=a1_constr_Func sigma=a1_constr_S ] = 1
RooFormulaVar::a1_constr_Func[ actualVars=(a1) formula="a1-1.0" ] = 0
SimpleGaussianConstraint::lumi_Pdf[ x=lumi mean=lumi_In sigma=1 ] = 1
SimpleGaussianConstraint::uncTest_Pdf[ x=uncTest mean=uncTest_In sigma=1 ] = 1
```

The reader can find some differences from the previous p.d.f., other than the constraint p.d.f. for the new nuisance parameter: first, `ProcessNormalization::n_exp_binbin1_proc_sig1_bin1` is now wrapped inside `RooProduct::n_exp_final_binbin1_proc_sig1_bin1`, and second, `CMSHistFunc::shapeSig_bin1_sig1_bin1_morph` now contains `uncTest`, the nuisance parameter, together with the morphing information.

# References

[1] https://www.stat.tamu.edu/~suhasini/teaching613/chapter3.pdf

[2] R. Barlow and C. Beeston, "Fitting using finite Monte Carlo samples", Comput. Phys. Commun. 77 (1993) 219, http://atlas.physics.arizona.edu/~kjohns/teaching/phys586/s06/barlow.pdf

[3] J. S. Conway, "Incorporating Nuisance Parameters in Likelihoods for Multisource Spectra", doi:10.5170/CERN-2011-006.115, https://arxiv.org/abs/1103.0354