===========================================================
                                      .___ __  __   
          _________________  __ __  __| _/|__|/  |_ 
         / ___\_` __ \__  \ |  |  \/ __ | | \\_  __\
        / /_/  >  | \// __ \|  |  / /_/ | |  ||  |  
        \___  /|__|  (____  /____/\____ | |__||__|  
       /_____/            \/           \/           
              grep rough audit - static analysis tool
                  v2.8 written by @Wireghoul
=================================[justanotherhacker.com]===
r-cran-bms-0.3.4/inst/doc/bms.Rnw-277-	\item Empirical Bayes $g$ -- local (\verb+EBL+): $g_\gamma=\arg\max_g \; p(y|M_\gamma,X,g)$. Authors such as \citet{george00} or \citet{hansen01} advocate an 'Empirical Bayes' approach, using information contained in the data $(y,X)$ to elicit $g$ via maximum likelihood. This amounts to setting $g_\gamma=\max(0,F^{OLS}_\gamma-1)$, where $F^{OLS}_\gamma$ is the standard OLS F-statistic for model $M_\gamma$. Despite the advantages discussed below, the \verb+EBL+ prior is not widely popular, since it involves 'peeking' at the data when formulating the prior. Moreover, asymptotic 'consistency' of BMA is not guaranteed in this case.
r-cran-bms-0.3.4/inst/doc/bms.Rnw:278:	\item Hyper-$g$ prior (\verb+hyper+): \citet{liang:mgp} propose putting a hyper-prior on $g$; in order to arrive at closed-form solutions, they suggest a Beta prior on the shrinkage factor of the form $\frac{g}{1+g} \sim Beta \left(1, \frac{a}{2}-1 \right)$, where $a$ is a parameter in the range $2 < a \leq 4$. Then, the prior expected value of the shrinkage factor is $E(\frac{g}{1+g})=\frac{2}{a}$. Moreover, setting $a=4$ corresponds to a uniform prior distribution of $\frac{g}{1+g}$ over the interval $[0,1]$, while $a \rightarrow 2$ concentrates prior mass very close to unity (thus corresponding to $g\rightarrow \infty$). (\verb+bms+ allows setting $a$ via the argument \verb+g="hyper=x"+, where \verb+x+ denotes the $a$ parameter.) 
r-cran-bms-0.3.4/inst/doc/bms.Rnw-279-	The virtue of the hyper-prior is that it allows for prior assumptions about $g$, but relies on Bayesian updating to adjust it. This limits the risk of unintended consequences on the posterior results, while retaining the theoretical advantages of a fixed $g$. Therefore \citet{fz:superM} prefer the use of hyper-$g$ over other available $g$-prior frameworks.
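The prior-mean claim quoted above follows directly from standard Beta moments; as a quick check (ordinary Beta algebra, not part of the audited vignette):

```latex
% For X ~ Beta(alpha, beta), E(X) = alpha / (alpha + beta).
% With alpha = 1 and beta = a/2 - 1 as in the hyper-g prior:
\[
E\!\left(\frac{g}{1+g}\right)
  = \frac{\alpha}{\alpha+\beta}
  = \frac{1}{1+\left(\frac{a}{2}-1\right)}
  = \frac{2}{a}.
\]
% a = 4 gives prior mean 1/2, i.e. Beta(1,1), the uniform case;
% as a -> 2, beta -> 0 and prior mass piles up at 1 (g -> infinity).
```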
##############################################
r-cran-bms-0.3.4/inst/doc/bms.Rnw-296-
r-cran-bms-0.3.4/inst/doc/bms.Rnw:297:The above results show that using a flexible, model-specific prior on \citet{fls:ccg} data yields rather small posterior estimates of $\frac{g}{1+g}$, indicating that the \verb+g="BRIC"+ prior used in \verb+fls_combi+ may be set too far from zero. This interacts with the uniform model prior to concentrate posterior model mass on quite large models. However, imposing a uniform model prior implies a prior expected model size of $K/2=20.5$, which may seem excessive. Instead, one may impose a smaller expected model size through a corresponding model prior -- e.g. a prior model size of 7 as in \citet{bace04}. This can be combined with a hyper-$g$ prior, where the argument \verb+g="hyper=UIP"+ imposes an $a$ parameter such that the prior expected value of $g$ corresponds to the unit information prior ($g=N$).\footnote{This is the default hyper-$g$ prior and may therefore also be obtained with \texttt{g=\textquotedbl hyper\textquotedbl}.}
r-cran-bms-0.3.4/inst/doc/bms.Rnw-298-<<>>=
##############################################
r-cran-bms-0.3.4/R/plotConv.R-22-    if (as.logical(include.legend)) 
r-cran-bms-0.3.4/R/plotConv.R:23:        legend("topright", lty = eval(dotargs$lty), legend = c("PMP (MCMC)", 
r-cran-bms-0.3.4/R/plotConv.R:24:            "PMP (Exact)"), col = eval(dotargs$col), ncol = 2, 
r-cran-bms-0.3.4/R/plotConv.R:25:            bty = "n", cex = 1, lwd = eval(dotargs$lwd))
r-cran-bms-0.3.4/R/plotConv.R-26-}
##############################################
r-cran-bms-0.3.4/R/plotModelsize.R-82-        grid()
r-cran-bms-0.3.4/R/plotModelsize.R:83:    points(kvec[ksubset + 1], cex = 0.8, pch = eval(dotargs$pch))
r-cran-bms-0.3.4/R/plotModelsize.R-84-    axis(1, las = 1, at = 1:length(ksubset), labels = ksubset, 
r-cran-bms-0.3.4/R/plotModelsize.R:85:        cex.axis = eval(dotargs$cex.axis))
r-cran-bms-0.3.4/R/plotModelsize.R-86-    if (include.legend) {
r-cran-bms-0.3.4/R/plotModelsize.R-87-        if (is.null(prior) || all(is.na(prior))) {
r-cran-bms-0.3.4/R/plotModelsize.R:88:            legend(x = "topright", lty = eval(dotargs$lty), legend = c("Posterior"), 
r-cran-bms-0.3.4/R/plotModelsize.R:89:                col = eval(dotargs$col), ncol = 1, bty = "n", 
r-cran-bms-0.3.4/R/plotModelsize.R:90:                lwd = eval(dotargs$lwd))
r-cran-bms-0.3.4/R/plotModelsize.R-91-        }
r-cran-bms-0.3.4/R/plotModelsize.R-92-        else {
r-cran-bms-0.3.4/R/plotModelsize.R:93:            legend(x = "topright", lty = eval(dotargs$lty), legend = c("Posterior", 
r-cran-bms-0.3.4/R/plotModelsize.R:94:                "Prior"), col = eval(dotargs$col), ncol = 2, 
r-cran-bms-0.3.4/R/plotModelsize.R:95:                bty = "n", lwd = eval(dotargs$lwd))
r-cran-bms-0.3.4/R/plotModelsize.R-96-        }
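The `eval(dotargs$...)` calls flagged here (and in `plotConv.R` above) evaluate caller-controlled objects, which is why graudit reports them. Assuming `dotargs` is, as its name suggests, a plain captured list of `...` graphics arguments, a sketch of the same legend call without `eval` (the helper name and the fixed argument set are illustrative, not from the package):

```r
# Hypothetical helper: pass the relevant entries of dotargs straight to
# legend() via do.call(), instead of eval()-ing each element separately.
draw_legend <- function(dotargs, labels = c("Posterior", "Prior")) {
  # keep only the graphical parameters the flagged code actually used
  keep <- intersect(names(dotargs), c("lty", "col", "lwd"))
  do.call(graphics::legend,
          c(list(x = "topright", legend = labels,
                 ncol = length(labels), bty = "n"),
            dotargs[keep]))
}
```

If the package stores unevaluated language objects in `dotargs` (e.g. via `match.call()`), they would need a controlled `eval` in a known environment instead; the simplification above is an assumption.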
##############################################
r-cran-bms-0.3.4/vignettes/bms.Rnw-277-	\item Empirical Bayes $g$ -- local (\verb+EBL+): $g_\gamma=\arg\max_g \; p(y|M_\gamma,X,g)$. Authors such as \citet{george00} or \citet{hansen01} advocate an 'Empirical Bayes' approach, using information contained in the data $(y,X)$ to elicit $g$ via maximum likelihood. This amounts to setting $g_\gamma=\max(0,F^{OLS}_\gamma-1)$, where $F^{OLS}_\gamma$ is the standard OLS F-statistic for model $M_\gamma$. Despite the advantages discussed below, the \verb+EBL+ prior is not widely popular, since it involves 'peeking' at the data when formulating the prior. Moreover, asymptotic 'consistency' of BMA is not guaranteed in this case.
r-cran-bms-0.3.4/vignettes/bms.Rnw:278:	\item Hyper-$g$ prior (\verb+hyper+): \citet{liang:mgp} propose putting a hyper-prior on $g$; in order to arrive at closed-form solutions, they suggest a Beta prior on the shrinkage factor of the form $\frac{g}{1+g} \sim Beta \left(1, \frac{a}{2}-1 \right)$, where $a$ is a parameter in the range $2 < a \leq 4$. Then, the prior expected value of the shrinkage factor is $E(\frac{g}{1+g})=\frac{2}{a}$. Moreover, setting $a=4$ corresponds to a uniform prior distribution of $\frac{g}{1+g}$ over the interval $[0,1]$, while $a \rightarrow 2$ concentrates prior mass very close to unity (thus corresponding to $g\rightarrow \infty$). (\verb+bms+ allows setting $a$ via the argument \verb+g="hyper=x"+, where \verb+x+ denotes the $a$ parameter.) 
r-cran-bms-0.3.4/vignettes/bms.Rnw-279-	The virtue of the hyper-prior is that it allows for prior assumptions about $g$, but relies on Bayesian updating to adjust it. This limits the risk of unintended consequences on the posterior results, while retaining the theoretical advantages of a fixed $g$. Therefore \citet{fz:superM} prefer the use of hyper-$g$ over other available $g$-prior frameworks.
##############################################
r-cran-bms-0.3.4/vignettes/bms.Rnw-296-
r-cran-bms-0.3.4/vignettes/bms.Rnw:297:The above results show that using a flexible, model-specific prior on \citet{fls:ccg} data yields rather small posterior estimates of $\frac{g}{1+g}$, indicating that the \verb+g="BRIC"+ prior used in \verb+fls_combi+ may be set too far from zero. This interacts with the uniform model prior to concentrate posterior model mass on quite large models. However, imposing a uniform model prior implies a prior expected model size of $K/2=20.5$, which may seem excessive. Instead, one may impose a smaller expected model size through a corresponding model prior -- e.g. a prior model size of 7 as in \citet{bace04}. This can be combined with a hyper-$g$ prior, where the argument \verb+g="hyper=UIP"+ imposes an $a$ parameter such that the prior expected value of $g$ corresponds to the unit information prior ($g=N$).\footnote{This is the default hyper-$g$ prior and may therefore also be obtained with \texttt{g=\textquotedbl hyper\textquotedbl}.}
r-cran-bms-0.3.4/vignettes/bms.Rnw-298-<<>>=
##############################################
r-cran-bms-0.3.4/debian/tests/run-unit-test-2-oname=bms
r-cran-bms-0.3.4/debian/tests/run-unit-test:3:pkg=r-cran-`echo $oname | tr [A-Z] [a-z]`
r-cran-bms-0.3.4/debian/tests/run-unit-test-4-
r-cran-bms-0.3.4/debian/tests/run-unit-test-5-if [ "$AUTOPKGTEST_TMP" = "" ] ; then
r-cran-bms-0.3.4/debian/tests/run-unit-test:6:  AUTOPKGTEST_TMP=`mktemp -d /tmp/${pkg}-test.XXXXXX`
r-cran-bms-0.3.4/debian/tests/run-unit-test-7-fi
##############################################
r-cran-bms-0.3.4/debian/tests/run-unit-test-10-for rnw in `ls *.[rR]nw` ; do
r-cran-bms-0.3.4/debian/tests/run-unit-test:11:rfile=`echo $rnw | sed 's/\.[rR]nw/.R/'`
r-cran-bms-0.3.4/debian/tests/run-unit-test-12-R --no-save <<EOT
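The backtick substitutions, the unquoted `tr [A-Z] [a-z]` character classes (which the shell can expand as globs), and the `for rnw in \`ls ...\`` pattern flagged above are classic shell-audit findings. A hedged sketch of the same steps using `$(...)`, quoting, and direct glob iteration (variable names kept from the script; the `: "$rfile"` line stands in for the elided `R --no-save` heredoc, which is not reconstructed here):

```shell
#!/bin/sh
set -eu

oname=bms
# quoted POSIX character classes: no risk of [A-Z] matching a filename
pkg="r-cran-$(printf '%s' "$oname" | tr '[:upper:]' '[:lower:]')"

if [ -z "${AUTOPKGTEST_TMP:-}" ]; then
  AUTOPKGTEST_TMP=$(mktemp -d "/tmp/${pkg}-test.XXXXXX")
fi

# iterate the glob directly instead of parsing `ls` output
for rnw in ./*.[rR]nw; do
  [ -e "$rnw" ] || continue       # glob matched nothing
  rfile="${rnw%.[rR]nw}.R"        # parameter expansion instead of sed
  : "$rfile"                      # placeholder for the R heredoc step
done
```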