===========================================================
                                      .___ __  __   
          _________________  __ __  __| _/|__|/  |_ 
         / ___\_` __ \__  \ |  |  \/ __ | | \\_  __\
        / /_/  >  | \// __ \|  |  / /_/ | |  ||  |  
        \___  /|__|  (____  /____/\____ | |__||__|  
       /_____/            \/           \/           
              grep rough audit - static analysis tool
                  v2.8 written by @Wireghoul
=================================[justanotherhacker.com]===
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-132-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:133:As a minimal example, assume you'd like to perform a linear regression and that you have in your workspace `y` (a vector of length $n$) and `X` (a matrix of dimension $n \times p$). For this example, we use the default values for `Prior` and so do not specify the `Prior` argument. These components (`Data`, `Prior`, and `Mcmc`, as well as their arguments, including `R` and `nprint`) are discussed in the subsections that follow. The `bayesm` syntax is then simply:
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-134-
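As a minimal sketch of the call this describes (using `runireg`, `bayesm`'s univariate regression sampler, as the example; argument names follow the package documentation, and the values of `R` and `nprint` here are illustrative assumptions):

```r
library(bayesm)

# y: length-n response vector; X: n x p design matrix (assumed already in the workspace)
out <- runireg(
  Data = list(y = y, X = X),          # required data inputs
  Mcmc = list(R = 10000, nprint = 0)  # R MCMC draws; nprint = 0 suppresses progress printing
)
# Prior is omitted, so the function's default prior settings are used
```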
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-163-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:164:1. The second component, `Z`, is present but optional for all hierarchical models. `Z` is a matrix of cross-sectional unit characteristics that drive the mean responses; that is, a matrix of covariates for the individual parameters (e.g. $\beta_i$'s). For example, the model (omitting the priors) for `rhierMnlRwMixture` is:
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-165-
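As a hedged sketch of how `Z` enters the `Data` argument for a hierarchical sampler such as `rhierMnlRwMixture` (component names per the function's documentation; the objects `lgtdata`, `Z`, and `p` are assumed to exist in the workspace):

```r
# Z: N x q matrix of cross-sectional unit characteristics, one row per unit.
# Z is optional -- omitting it reduces the model to a common mean for the beta_i's.
Data <- list(lgtdata = lgtdata, Z = Z, p = p)
```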
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-195-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:196:Additional components of the `Mcmc` argument are function-specific, but typically include starting values for the algorithm. For example, the `Mcmc` argument for `runiregGibbs` takes `sigmasq` as a scalar element of the list. The Gibbs Sampler for `runiregGibbs` first draws $\beta | \sigma^2$, then draws $\sigma^2 | \beta$, and then repeats. For the first draw of $\beta$ in the MCMC chain, a value of $\sigma^2$ is required. The user can specify a value using `Mcmc$sigmasq`, or the user can omit the argument and the function will use its default (`sigmasq = var(Data$y)`).
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-197-
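A minimal sketch of the two ways to handle the starting value (argument names per the `runiregGibbs` documentation; `R = 10000` is an illustrative choice):

```r
# explicit starting value for sigma^2
out1 <- runiregGibbs(Data = list(y = y, X = X),
                     Mcmc = list(R = 10000, sigmasq = 1))

# or omit sigmasq and let the function default to var(Data$y)
out2 <- runiregGibbs(Data = list(y = y, X = X),
                     Mcmc = list(R = 10000))
```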
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-207-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:208:- Matrices are returned for draws of parameters with a natural grouping. Again using `runireg` as the example, the output list includes `betadraw`, an `R/keep` $\times$ `ncol(X)` matrix for the vector of $\beta$ parameters. 
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-209-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:210:    In contrast to the next bullet, draws for the parameters in a variance-covariance matrix are returned in matrix form. For example, `rmnpGibbs` implements a Gibbs Sampler for a multinomial probit model where one set of parameters is the $(p-1) \times (p-1)$ matrix $\Sigma$. The output list for `rmnpGibbs` includes the list element `sigmadraw`, which is a matrix of dimension `R/keep` $\times (p-1)^2$ with each row containing a draw (in vector form) for all the elements of the matrix $\Sigma$. `bayesm`'s `summary` and `plot` methods (see below) are designed to handle this format.
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-211-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:212:- Arrays are used when parameters have a natural matrix-grouping, such that the MCMC algorithm returns `R/keep` draws of the matrix. For example, `rsurGibbs` returns a list that includes `Sigmadraw`, an $m \times m \times$`R/keep` array, where $m$ is the number of regression equations. As a second example, `rhierLinearModel` estimates a hierarchical linear regression model with a normal prior, and returns a list that includes `betadraw`, an $n \times k \times$`R/keep` array, where $n$ signifies the number of individuals (each with their own $\beta_i$) and $k$ signifies the number of covariates (`ncol(X)` = $k$).
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-213-
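The three output shapes can be checked directly with `dim()`. A small sketch, assuming `out` holds output from the respective functions named above:

```r
dim(out$betadraw)    # runireg:          matrix, R/keep x ncol(X)
dim(out$sigmadraw)   # rmnpGibbs:        matrix, R/keep x (p-1)^2 (vectorized Sigma draws)
dim(out$Sigmadraw)   # rsurGibbs:        array,  m x m x R/keep
dim(out$betadraw)    # rhierLinearModel: array,  n x k x R/keep
```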
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-408-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:409:`runireg` returns a list that we have saved in `out`. The list contains two elements, `betadraw` and `sigmasqdraw`, which you can verify by running `str(out)`. `betadraw` is an `R/keep` $\times$ `ncol(X)` matrix ($10,000 \times 3$, with a column for each of the intercept, price, and display) with class `bayesm.mat`. We can analyze or summarize the marginal posterior distributions for any $\beta$ parameter or the $\sigma^2$ parameter. For example, we can plot histograms of the price coefficient (even though it is known to follow a t-distribution, see BSM Ch. 2.8) and of $\sigma^2$. Notice how concentrated the posterior distributions are compared to their priors above.
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-410-
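A minimal sketch of those histograms (the price coefficient is assumed to be the second column of `betadraw`, per the intercept/price/display ordering described above):

```r
par(mfrow = c(1, 2))
hist(out$betadraw[, 2], breaks = 40,
     main = "price coefficient", xlab = "beta_price")
hist(out$sigmasqdraw, breaks = 40,
     main = "error variance", xlab = "sigma^2")
```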
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-455-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:456:For this example, we analyze the `margarine` dataset, which provides panel data on purchases of margarine. The data are stored in two dataframes. The first, `choicePrice`, lists the outcome of `r data(margarine); format(nrow(margarine$choicePrice), big.mark=",")` choice occasions as well as the choosing household and the prices of the `r max(margarine$choicePrice[,2])` choice alternatives. The second, `demos`, provides demographic information about the choosing households, such as their income and family size. We begin by merging the information from these two dataframes:
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-457-
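A sketch of the merge step, assuming both dataframes share a household-identifier column named `hhid` (the column name is an assumption about the dataset layout):

```r
library(bayesm)
data(margarine)

# join household demographics onto each choice occasion
marg <- merge(margarine$choicePrice, margarine$demos, by = "hhid")
```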
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-465-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:466:In this example, we will implement a multinomial logit model using `rmnlIndepMetrop`. This posterior sampling function requires `y` to be a length-$n$ vector (or an $n \times 1$ matrix) of multinomial outcomes ($1, \dots, p$). That is, each element of `y` corresponds to a choice occasion $i$ with the value of the element $y_i$ indicating the choice that was made. So if the fourth alternative was chosen on the seventh choice occasion, then $y_7 = 4$. The `margarine` data are stored in that format, and so we easily specify `y` with the following code:
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-467-
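A minimal sketch of that step, assuming the merged dataframe `marg` stores the chosen alternative (coded $1, \dots, p$) in a column named `choice`:

```r
y <- marg$choice  # one multinomial outcome per choice occasion
```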
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-471-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:472:`rmnlIndepMetrop` requires `X` to be an $np \times k$ matrix. That is, each alternative is listed on its own row, with a group of $p$ rows together corresponding to the alternatives available on one choice occasion. However, the `margarine` data are stored with the various choice alternatives in columns rather than rows, so reformatting is necessary. `bayesm` provides the utility function `createX` to assist with the conversion. `createX` requires the user to specify the number of choice alternatives `p` as well as the number of alternative-specific variables `na` and an $n \times (p \cdot na)$ matrix of alternative-specific data `Xa` ("a" for alternative-specific), with one column per alternative for each alternative-specific variable. Here, we have $p=10$ choice alternatives with `na` $=1$ alternative-specific variable (price). If we were only interested in using price as a covariate, we would code:
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-473-
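A hedged sketch of that `createX` call (argument names per the `bayesm` documentation; the price-column positions in the merged data and the choice of `base = 1` are assumptions for illustration):

```r
# prices of the p = 10 alternatives occupy one column each in the merged data
Xa <- as.matrix(marg[, 2:11])

X <- createX(p = 10, na = 1, nd = NULL,
             Xa = Xa, Xd = NULL, base = 1)
```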
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-479-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:480:Notice that `createX` uses $p-1$ dummy variables to distinguish the $p$ choice alternatives. As with factor variables in linear regression, one factor must be the base; the coefficients on the other factors report deviations from the base. The user may specify the base alternative using the `base` argument (as we have done above), or let it default to the alternative with the highest index. 
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-481-
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-515-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:516:where $\hat{\beta}$ is the MLE, $H = \sum_i x_i A_i x_i'$, and the candidate distribution used in the Metropolis algorithm is the multivariate Student t. For more detail, see Section 11 of BSM Chapter 3.
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-517-
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-536-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:537:`rmnlIndepMetrop` returns a list that we have saved in `out`. The list contains 3 elements, `betadraw`, `loglike`, and `acceptr`, which you can verify by running `str(out)`. `betadraw` is a $10,000 \times 28$ matrix with class `bayesm.mat`. As with the linear regression of Example 1 above, we can plot or summarize features of the posterior distribution in many ways. For information on each marginal posterior distribution, call `summary(out)` or `plot(out)`. Because we have 28 covariates (intercepts and demographic variables make up 9 columns each and there is one column for the price variable), we omit the full set of results to save space; instead, we present only summary statistics for the marginal posterior distribution for $\beta_\text{price}$:
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-538-
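A minimal sketch of extracting those summary statistics for a single coefficient (the price coefficient is assumed to sit in column 10 of `betadraw`, consistent with the indexing used later in this vignette):

```r
summary.bayesm.mat(out$betadraw[, 10])
```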
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-542-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:543:In addition to summary information for a marginal posterior distribution, we can plot it. We use `bayesm`'s `plot` generic function (calling `plot(out$betadraw)` would provide the same plots for all 28 $X$ variables). In the histogram, the green bars delimit a 95\% Bayesian credibility interval, the yellow bars show +/- 2 numerical standard errors for the posterior mean, and the red bar indicates the posterior mean. The subsequent two plots are a trace plot and an ACF plot.
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-544-
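A sketch of plotting a single coefficient rather than all 28 (again assuming price is column 10 of `betadraw`):

```r
# histogram with credibility interval, trace plot, and ACF for beta_price only
plot.bayesm.mat(out$betadraw[, 10])
```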
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-548-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:549:We see that the posterior is approximately normally distributed with a mean of `r format(round(mean(out$betadraw[,10]),1), big.mark=",")` and a standard deviation of `r round(sd(out$betadraw[,10]), 2)`. The trace plot shows good mixing. The ACF plot shows a fair amount of correlation such that, even though the algorithm took `R = 10,000` draws, the Effective Sample Size (as reported in the summary stats above) is only `r format(round(summary.bayesm.mat(out$betadraw[,10])[1,5],0), big.mark = ",")`.
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-550-
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-572-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:573:The `camera` data are stored in a list-of-lists format, which is the format required by `bayesm`'s posterior sampling functions for hierarchical models. This format has one list per individual with each list containing a vector `y` of choice outcomes and a matrix `X` of covariates. As with the multinomial logit model of the last example, `y` is a length-$n_i$ vector (or one-column matrix) and `X` has dimensions $(n_i \cdot j) \times k$ where $n_i$ is the number of choice occasions faced by individual $i$, $j$ is the number of choice alternatives, and $k$ is the number of covariates. For the `camera` data, $N=332$, $n_i=16$ for all $i$, $j=5$, and $k=10$.
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-574-
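Those dimensions can be verified directly on the first individual's list, a small sketch:

```r
library(bayesm)
data(camera)

length(camera)        # N = 332 individuals
length(camera[[1]]$y) # 16 choice occasions for individual 1
dim(camera[[1]]$X)    # (16 occasions x 5 alternatives) x 10 covariates = 80 x 10
```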
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-631-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:632:Note also that we have assumed a normal "first-stage" prior distribution over the $\beta$'s. `rhierMnlRwMixture` permits a more flexible mixture-of-normals first-stage prior (hence the "mixture" in the function name). However, for our example, we will not include this added flexibility (`Prior$ncomp = 1` below).
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-633-
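A hedged sketch of the resulting call (argument names per the `rhierMnlRwMixture` documentation; the `R` and `nprint` values are illustrative assumptions):

```r
out <- rhierMnlRwMixture(
  Data  = list(lgtdata = camera, p = 5),
  Prior = list(ncomp = 1),            # single normal component: no mixture flexibility
  Mcmc  = list(R = 10000, nprint = 0)
)
```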
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-651-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:652:    - `probdraw` tells us the probability that each draw came from a particular normal component. This is relevant when there is a mixture-of-normals first-stage prior. However, since our specified prior over the $\beta$ vector is one normal distribution, `probdraw` is a $10,000 \times 1$ vector of all 1's.
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-653-    
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-661-
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd:662:We can summarize results as before. `plot(out$betadraw)` provides plots for each variable that summarize the distributions of the individual parameters. For brevity, we provide just a histogram of posterior means for the 332 individual coefficients on wifi capability. 
r-cran-bayesm-3.1-4+dfsg/vignettes/bayesm_Overview_Vignette.Rmd-663-
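A sketch of that histogram: `betadraw` here is an $N \times k \times$ `R/keep` array, so the posterior mean for each individual is an average over the third (draw) dimension. The column index for the wifi coefficient is an assumption for illustration.

```r
# one posterior mean per individual for the assumed wifi column (here, column 2)
post_means <- apply(out$betadraw[, 2, ], 1, mean)
hist(post_means, breaks = 40,
     xlab = "posterior mean of beta_wifi", main = "")
```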
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd-57-
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd:58:The "deep" individual-specific parameters ($\beta_i^*$) are assumed to be drawn from a mixture of $M$ normal distributions with mean values driven by cross-sectional unit characteristics $Z$. That is, $\beta_i^* = z_i' \Delta + u_i$ where $u_i$ has a mixture-of-normals distribution.^[As documented in the helpfile for this function (accessible by `?bayesm::rhierMnlRwMixture`), draws from the posterior of the constrained parameters ($\beta$) can be found in the output `$betadraw` while draws from the posterior of the unconstrained parameters ($\beta^*$) are available in `$nmix$compdraw`.]
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd-59-
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd-137-
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd:138:We see that the hyperprior values for constrained logit parameters are far from uninformative. As a result, `rhierMnlRwMixture` implements different default priors for parameters when sign constraints are imposed. In particular, $a_\mu=0.1$, $\nu = k + 15$, and $V = \nu \cdot \text{diag}(d)$ where $d_k=4$ if $\beta_{ik}$ is unconstrained and $d_k=0.1$ if $\beta_{ik}$ is constrained. Additionally, $\bar{\mu}_m = 0$ if unconstrained and $\bar{\mu}_m = 2$ otherwise. As the following plots show, this yields substantially less informative hyperpriors on $\beta_{ik}^*$ without significantly affecting the hyperpriors on $\beta_{ik}$ or $\beta_{ij}$ ($j \ne k$).
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd-139-
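Sign constraints are communicated to `rhierMnlRwMixture` through the `Prior` list. A hedged sketch, per the function's documentation: `SignRes` is a length-$k$ vector with $-1$ (constrain negative), $0$ (unconstrained), or $1$ (constrain positive) for each coefficient; the particular constrained position below is an assumption for illustration.

```r
# k = 10 coefficients; constrain, e.g., the last one to be negative
Prior <- list(ncomp = 1, SignRes = c(rep(0, 9), -1))
```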
##############################################
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd-168-
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd:169:Here we demonstrate the implementation of the hierarchical multinomial logit model with sign-constrained parameters. We return to the `camera` data used in Example 3 of the "`bayesm` Overview" Vignette. This dataset contains conjoint choice data for 332 respondents who evaluated digital cameras. The data are stored in a list-of-lists format with one list per respondent, each containing two elements: a vector of choices (`y`) and a matrix of covariates (`X`). Notice the dimensions: there is one value for each choice occasion in each individual's `y` vector but one row per alternative in each individual's `X` matrix, making `nrow(X)` = 5 $\times$ `length(y)` because there are 5 alternatives per choice occasion.
r-cran-bayesm-3.1-4+dfsg/vignettes/Constrained_MNL_Vignette.Rmd-170-
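That dimension relationship can be checked across all respondents with a one-line sketch:

```r
# TRUE if nrow(X) = 5 * length(y) holds for every respondent's list
all(vapply(camera, function(h) nrow(h$X) == 5 * length(h$y), logical(1)))
```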