===========================================================
                                      .___ __  __   
          _________________  __ __  __| _/|__|/  |_ 
         / ___\_` __ \__  \ |  |  \/ __ | | \\_  __\
        / /_/  >  | \// __ \|  |  / /_/ | |  ||  |  
        \___  /|__|  (____  /____/\____ | |__||__|  
       /_____/            \/           \/           
              grep rough audit - static analysis tool
                  v2.8 written by @Wireghoul
=================================[justanotherhacker.com]===
r-cran-gbm-2.1.8/vignettes/gbm.Rnw-79-\end{equation}
r-cran-gbm-2.1.8/vignettes/gbm.Rnw:80:where $\rho$ is the size of the step along the direction of steepest descent. Clearly, this step alone is far from our desired goal. First, it only fits $f$ at values of $\mathbf{x}$ for which we have observations. Second, it does not take into account that observations with similar $\mathbf{x}$ are likely to have similar values of $f(\mathbf{x})$. Both of these problems would have disastrous effects on generalization error. However, Friedman suggests selecting a class of functions, usually regression trees, that uses the covariate information to approximate the gradient. This line of reasoning produces his Gradient Boosting algorithm, shown in Figure~\ref{fig:GradientBoost}. At each iteration the algorithm determines the direction, given by the gradient, in which it needs to improve the fit to the data, and selects from the allowable class of functions the particular model most in agreement with that direction. In the case of squared-error loss, $\Psi(y_i,f(\mathbf{x}_i)) = (y_i-f(\mathbf{x}_i))^2$, this algorithm corresponds exactly to residual fitting.
r-cran-gbm-2.1.8/vignettes/gbm.Rnw-81-
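As a worked illustration of the residual-fitting equivalence described above: under squared-error loss the negative gradient at each observation is (up to a constant) the residual $y_i - f(\mathbf{x}_i)$, so each boosting iteration fits the next tree to the current residuals and takes a shrunken step. The R sketch below is illustrative only; it uses rpart as a stand-in weak learner on made-up data and is not gbm's actual implementation.

    # Sketch: gradient boosting with squared-error loss reduces to residual fitting.
    set.seed(1)
    n   <- 200
    x   <- runif(n)
    y   <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
    f   <- rep(mean(y), n)            # f_0: best constant fit
    rho <- 0.1                        # step size (shrinkage)
    for (t in 1:100) {
      r   <- y - f                    # residuals play the role of the negative gradient
      fit <- rpart::rpart(r ~ x, control = rpart::rpart.control(maxdepth = 2))
      f   <- f + rho * predict(fit)   # step along the tree's approximation of the gradient
    }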
##############################################
r-cran-gbm-2.1.8/vignettes/gbm.Rnw-212-
r-cran-gbm-2.1.8/vignettes/gbm.Rnw:213:Notes: \begin{itemize} \item For non-zero offset terms, the computation of the initial value requires Newton-Raphson. Initialize $f_0=0$ and iterate $\displaystyle f_0 \leftarrow f_0 + \frac{\sum w_i(y_i-p_i)}{\sum w_ip_i(1-p_i)}$ where $\displaystyle p_i = \frac{1}{1+\exp(-(o_i+f_0))}$. \end{itemize}
r-cran-gbm-2.1.8/vignettes/gbm.Rnw-214-
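A worked numerical sketch of the Newton-Raphson initialization quoted above, for the Bernoulli case with offsets. The inputs y, w and o below are made up for illustration; this is not code from the package.

    # Newton-Raphson for the initial value f_0 with non-zero offsets o_i.
    y  <- c(0, 1, 1, 0, 1)
    w  <- rep(1, length(y))              # observation weights
    o  <- c(-0.5, 0.2, 0.0, 0.3, -0.1)   # offset terms
    f0 <- 0
    for (iter in 1:25) {
      p  <- 1 / (1 + exp(-(o + f0)))     # p_i = 1 / (1 + exp(-(o_i + f_0)))
      f0 <- f0 + sum(w * (y - p)) / sum(w * p * (1 - p))
    }
    f0                                   # converged initial value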
##############################################
r-cran-gbm-2.1.8/R/gbm.more.R-154-   {
r-cran-gbm-2.1.8/R/gbm.more.R:155:      m <- eval(object$m, parent.frame())
r-cran-gbm-2.1.8/R/gbm.more.R-156-
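This hit is the audit flagging a dynamic-evaluation call: gbm.more() re-evaluates a call stored unevaluated in object$m, in the caller's environment, a common idiom for rebuilding the model frame the original fit used. A minimal sketch of that store-then-eval pattern follows; the function and object names are illustrative, not gbm's internals.

    # Store an unevaluated call, then re-evaluate it later in the caller's frame.
    make_fit <- function(formula, data) {
      obj <- list()
      obj$m <- match.call()                  # capture the call unevaluated
      obj$m[[1]] <- quote(stats::model.frame)
      obj
    }
    refit <- function(object) {
      m <- eval(object$m, parent.frame())    # the construct the audit flags
      nrow(m)
    }
    d <- data.frame(y = rnorm(10), x = rnorm(10))
    refit(make_fit(y ~ x, data = d))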
##############################################
r-cran-gbm-2.1.8/inst/doc/gbm.Rnw-79-\end{equation}
r-cran-gbm-2.1.8/inst/doc/gbm.Rnw:80:where $\rho$ is the size of the step along the direction of steepest descent. Clearly, this step alone is far from our desired goal. First, it only fits $f$ at values of $\mathbf{x}$ for which we have observations. Second, it does not take into account that observations with similar $\mathbf{x}$ are likely to have similar values of $f(\mathbf{x})$. Both of these problems would have disastrous effects on generalization error. However, Friedman suggests selecting a class of functions, usually regression trees, that uses the covariate information to approximate the gradient. This line of reasoning produces his Gradient Boosting algorithm, shown in Figure~\ref{fig:GradientBoost}. At each iteration the algorithm determines the direction, given by the gradient, in which it needs to improve the fit to the data, and selects from the allowable class of functions the particular model most in agreement with that direction. In the case of squared-error loss, $\Psi(y_i,f(\mathbf{x}_i)) = (y_i-f(\mathbf{x}_i))^2$, this algorithm corresponds exactly to residual fitting.
r-cran-gbm-2.1.8/inst/doc/gbm.Rnw-81-
##############################################
r-cran-gbm-2.1.8/inst/doc/gbm.Rnw-212-
r-cran-gbm-2.1.8/inst/doc/gbm.Rnw:213:Notes: \begin{itemize} \item For non-zero offset terms, the computation of the initial value requires Newton-Raphson. Initialize $f_0=0$ and iterate $\displaystyle f_0 \leftarrow f_0 + \frac{\sum w_i(y_i-p_i)}{\sum w_ip_i(1-p_i)}$ where $\displaystyle p_i = \frac{1}{1+\exp(-(o_i+f_0))}$. \end{itemize}
r-cran-gbm-2.1.8/inst/doc/gbm.Rnw-214-