===========================================================
                                      .___ __  __   
          _________________  __ __  __| _/|__|/  |_ 
         / ___\_` __ \__  \ |  |  \/ __ | | \\_  __\
        / /_/  >  | \// __ \|  |  / /_/ | |  ||  |  
        \___  /|__|  (____  /____/\____ | |__||__|  
       /_____/            \/           \/           
              grep rough audit - static analysis tool
                  v2.8 written by @Wireghoul
=================================[justanotherhacker.com]===
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-80-\end{equation}
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:81:where $x_i$ is a vector of cross-sectional data, time series, or both. In order for GMM to produce consistent estimates from the above conditions, $\theta_0$ has to be the unique solution to $E[g(\theta,x_i)]=0$ and has to belong to a compact space. Some boundedness assumptions on the higher moments of $g(\theta,x_i)$ are also required. However, GMM does not impose any condition on the distribution of $x_i$, other than on the degree of dependence of the observations when they form a time series.
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-82-
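The moment-condition definition above maps directly onto the function interface of \pkg{gmm}: the user supplies $g(\theta,x_i)$ as an R function returning an $n \times q$ matrix. The chunk below is a minimal sketch (simulated data and names of my own, not the vignette's example), estimating the mean and variance of a normal sample from three overidentifying conditions.
<<>>=
## Hedged sketch: g(theta, x) returns an n x q matrix, one column per
## moment condition. theta = (mu, sigma^2); the third condition uses the
## symmetry of the normal distribution, so q = 3 > p = 2.
library(gmm)
g_norm <- function(theta, x) {
  m1 <- x - theta[1]                     # E[x - mu] = 0
  m2 <- (x - theta[1])^2 - theta[2]      # E[(x - mu)^2 - sigma^2] = 0
  m3 <- (x - theta[1])^3                 # E[(x - mu)^3] = 0
  cbind(m1, m2, m3)
}
set.seed(1)
x_norm <- rnorm(200, mean = 4, sd = 2)
res_norm <- gmm(g_norm, x_norm, c(mu = mean(x_norm), sig2 = var(x_norm)))
summary(res_norm)
@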
##############################################
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-94-\]
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:95:where $l_i(\beta)$ is the density of $u_i$. In the presence of endogeneity of the explanatory variable $X$, which implies that $E(X_iu_i)\neq 0$, the IV method is often used. It solves the endogeneity problem by substituting $X$ with a matrix of instruments $H$, which is required to be correlated with $X$ and uncorrelated with $u$. These properties allow the model to be estimated using the conditional moment condition $E(u_i|H_i)=0$ or its implied unconditional moment condition $E(u_iH_i)=0$. In general, we say that $u_i$ is orthogonal to an information set $I_i$, or that $E(u_i|I_i)=0$, in which case $H_i$ is a vector containing functions of any element of $I_i$. The model can therefore be estimated by solving
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-96-\[
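In \pkg{gmm} this linear IV setup can be written with the formula interface, where the second argument is a one-sided formula listing the instruments. The sketch below uses simulated data and variable names of my own to illustrate the unconditional condition $E(u_iH_i)=0$.
<<>>=
## Hedged sketch: h1, h2 are correlated with x but not with u, so they are
## valid instruments; gmm(y ~ x, ~ h1 + h2) imposes E(u_i H_i) = 0.
library(gmm)
set.seed(2)
n  <- 500
h1 <- rnorm(n); h2 <- rnorm(n)
u  <- rnorm(n)
x  <- h1 + h2 + 0.5 * u + rnorm(n)   # endogenous regressor: E(x u) != 0
y  <- 1 + 2 * x + u
res_iv <- gmm(y ~ x, ~ h1 + h2)
summary(res_iv)
@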
##############################################
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-434-\]
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:435:where $u_t = 0.6\epsilon_{t-1} - 0.3\epsilon_{t-2} + \epsilon_t$ and $\epsilon_t \sim iid~N(0,1)$. This model can be estimated by GMM using any $X_{t-s}$ for $s>2$, because these lags are uncorrelated with $u_t$ but correlated with $X_{t-1}$ and $X_{t-2}$. However, as $s$ increases the quality of the instruments deteriorates, since the stationarity of the process implies that the autocorrelation goes to zero. For this example, the selected instruments are $(X_{t-3},X_{t-4},X_{t-5},X_{t-6})$ and the sample size equals 400. The ARMA(2,2) process is generated by the function \code{arima.sim}:
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-436-<<>>=
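## Sketch only -- the vignette's own chunk is not reproduced in this
## excerpt. arima.sim() generates an ARMA(2,2) series of length 400 with
## the MA part (0.6, -0.3) stated above; the AR coefficients here are
## illustrative placeholders, not values taken from the vignette.
set.seed(123)
x_arma <- arima.sim(n = 400, model = list(ar = c(0.5, 0.2), ma = c(0.6, -0.3)))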
##############################################
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-488-\] 
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:489:where $R_t$ is a $N\times 1$ vector of observed returns on stocks, $R_{mt}$ is the observed return on a proxy for the market portfolio, $R_f$ is the interest rate on short-term government bonds, and $\epsilon_t$ is a vector of error terms with covariance matrix $\Sigma_t$. When the model is estimated by ML or LS, $\Sigma_t$ is assumed to be a constant matrix $\Sigma$. However, GMM allows $\epsilon_t$ to be heteroskedastic and serially correlated. One implication of the CAPM is that the vector $\alpha$ should be zero. This can be tested by estimating the model with $(R_{mt}-R_f)$ as instruments and testing the null hypothesis $H_0:~\alpha=0$.
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-490-
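A single-asset sketch of this test (simulated returns; the vignette's own multi-asset example with real data is not shown in this excerpt) regresses excess returns on the excess market return and reads the test of $H_0:\alpha=0$ off the intercept:
<<>>=
## Hedged sketch: zm is the excess market return (Rm - Rf), z the excess
## stock return; using ~ zm as instrument gives the just-identified case.
library(gmm)
set.seed(3)
zm <- rnorm(300, mean = 0.005, sd = 0.04)
z  <- 0.9 * zm + rnorm(300, sd = 0.02)   # data generated with alpha = 0
capm <- gmm(z ~ zm, ~ zm)
summary(capm)                            # intercept row tests H0: alpha = 0
@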
##############################################
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-539-\]
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:540:where $W_t$ is a standard Brownian motion. Special cases of this process are the Brownian motion with drift ($\beta=0$ and $\gamma = 0$), the Ornstein-Uhlenbeck process ($\gamma=0$), and the Cox-Ingersoll-Ross or square-root process ($\gamma = 1/2$). It can be estimated using the following discrete-time approximation:
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-541-\[
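One possible moment function for such a discretized model, in the spirit of Chan, Karolyi, Longstaff and Sanders (1992), is sketched below; the vignette's own specification is not reproduced in this excerpt, and the parameter ordering $\theta=(\alpha,\beta,\sigma^2,\gamma)$ is my own convention.
<<>>=
## Hedged sketch: eps is the discretized drift residual and h the
## conditional variance sigma^2 * r_t^(2*gamma); four moment conditions
## identify theta = (alpha, beta, sigma^2, gamma).
g_rate <- function(theta, x) {
  r1  <- x[, 1]                              # r_{t+1}
  r0  <- x[, 2]                              # r_t
  eps <- r1 - r0 - theta[1] - theta[2] * r0
  h   <- theta[3] * r0^(2 * theta[4])
  cbind(eps, eps * r0, eps^2 - h, (eps^2 - h) * r0)
}
## usage sketch, given a numeric vector r of observed short rates:
## xr  <- cbind(r[-1], r[-length(r)])
## res <- gmm(g_rate, xr, c(0, 0, 0.1, 0.5))
@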
##############################################
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-623-\]
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:624:where $\hat{G}=d\bar{g}(\hat{\theta})/d\theta$ and $\hat{V}$ is obtained using \code{kernHAC()}. This is not a sandwich covariance matrix; it is computed by the \code{vcov()} method included in \pkg{gmm}. However, if any other weighting matrix, say $W$, is used, the covariance matrix of the coefficients must then be estimated as follows:
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-625-\[
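Continuing the hypothetical IV sketch above, the coefficient covariance in the default (optimal weighting) case is returned by the \code{vcov()} method, and the HAC estimate of $\hat{V}$ can be controlled through the \code{vcov} and \code{kernel} arguments of \code{gmm()}:
<<>>=
## Sketch, reusing y, x, h1, h2 and res_iv from the IV example above.
vcov(res_iv)                                  # coefficient covariance matrix
res_hac <- gmm(y ~ x, ~ h1 + h2, vcov = "HAC",
               kernel = "Quadratic Spectral")
vcov(res_hac)
@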
##############################################
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-658-\]
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:659:where $p_i$ is called the implied probability associated with the observation $x_i$. For the GEL method, it is assumed that $q>p$, because otherwise it would correspond to GMM. Therefore, as is the case for GMM, there is no solution to $\bar{g}(\theta)=0$. However, there is a solution to $\tilde{g}(\theta)=0$ for some choice of the probabilities $p_i$ such that $\sum_i p_i=1$. In fact, there are infinitely many solutions, since there are $(n+p)$ unknowns (the $n$ probabilities and the $p$ parameters) and only $(q+1)$ equations. GEL selects among them the one for which the distance between the vector of probabilities $p$ and the empirical density $1/n$ is minimized. The empirical likelihood of \cite{owen01} is a special case in which the distance is the likelihood ratio; the other methods in the GEL family use different metrics. If the moment conditions hold, the implied probabilities carry a lot of information about the stochastic properties of $x_i$. For GEL, the estimates of the expected value of the Jacobian and of the covariance matrix of the moment conditions, which are required to estimate $\theta$, are based on the $p_i$, while in GMM they are based on $1/n$. \cite{newey-smith04} show that this difference partially explains why the second order properties of GEL are better.
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-660-
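The same moment function used in the first sketch can be passed to \code{gel()}, with \code{type} selecting the member of the GEL family (for instance \code{"EL"}, \code{"ET"} or \code{"CUE"}); this is a sketch reusing the simulated normal sample from above, not the vignette's own example.
<<>>=
## Hedged sketch: gel() estimates theta together with the implied
## probabilities p_i that reweight the observations.
res_el <- gel(g_norm, x_norm, c(mu = mean(x_norm), sig2 = var(x_norm)),
              type = "EL")
summary(res_el)
@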
##############################################
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-683-\end{equation}
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw:684:where $\lambda$ is the Lagrange multiplier associated with the constraint (\ref{const}) and $\rho(v)$ is a strictly concave function normalized so that $\rho'(0)=\rho''(0)=-1$. It can be shown that $\rho(v)=\ln{(1-v)}$ corresponds to EL, $\rho(v)=-\exp{(v)}$ to ET, and a quadratic $\rho(v)$ to CUE.
r-cran-gmm-1.6-5/vignettes/gmm_with_R.rnw-685-
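For concreteness, a standard quadratic choice satisfying this normalization (stated here for completeness, not quoted from the vignette) is
\[
\rho(v) = -v - \frac{v^2}{2}, \qquad \rho'(v) = -1 - v, \qquad \rho''(v) = -1,
\]
which indeed gives $\rho'(0)=\rho''(0)=-1$ and leads to the CUE estimator.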