===========================================================
                                      .___ __  __   
          _________________  __ __  __| _/|__|/  |_ 
         / ___\_` __ \__  \ |  |  \/ __ | | \\_  __\
        / /_/  >  | \// __ \|  |  / /_/ | |  ||  |  
        \___  /|__|  (____  /____/\____ | |__||__|  
       /_____/            \/           \/           
              grep rough audit - static analysis tool
                  v2.8 written by @Wireghoul
=================================[justanotherhacker.com]===
r-cran-glmnet-4.0-2/vignettes/Coxnet.Rmd-85-
r-cran-glmnet-4.0-2/vignettes/Coxnet.Rmd:86:`coef(fit, s = cv.fit$lambda.min)` returns the $p$-length coefficient
r-cran-glmnet-4.0-2/vignettes/Coxnet.Rmd:87:vector of the solution corresponding to $\lambda =$ `cv.fit$lambda.min`.
r-cran-glmnet-4.0-2/vignettes/Coxnet.Rmd-88-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-139-```
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:140:It shows from left to right the number of nonzero coefficients (`Df`), the percent (of null) deviance explained (`%dev`) and the value of $\lambda$ (`Lambda`). Although by default `glmnet` calls for 100 values of `lambda`, the program stops early if `%dev` does not change sufficiently from one `lambda` to the next (typically near the end of the path).
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-141-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-172-```
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:173:`lambda.min` is the value of $\lambda$ that gives minimum mean cross-validated error. The other $\lambda$ saved is `lambda.1se`, which gives the most regularized model such that error is within one standard error of the minimum. To use that, we only need to replace `lambda.min` with `lambda.1se` above.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-174-```{r}
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-196-$$
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:197:where $\lambda \geq 0$ is a complexity parameter and $0 \leq \alpha \leq 1$ is a compromise between ridge ($\alpha = 0$) and lasso ($\alpha = 1$).
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-198-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-202-$$
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:203:where $\tilde{y}_i^{(j)} = \tilde{\beta}_0 + \sum_{\ell \neq j} x_{i\ell} \tilde{\beta}_\ell$, and $S(z, \gamma)$ is the soft-thresholding operator with value $\text{sign}(z)(|z|-\gamma)_+$.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-204-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-214-
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:215:* `lambda` can be provided, but is typically not, and the program constructs a sequence. When automatically generated, the $\lambda$ sequence is determined by `lambda.max` and `lambda.min.ratio`. The latter is the ratio of the smallest value of the generated $\lambda$ sequence (say `lambda.min`) to `lambda.max`. The program then generates `nlambda` values spaced evenly on the log scale from `lambda.max` down to `lambda.min`. `lambda.max` is not user-supplied, but is easily computed from the input $x$ and $y$: it is the smallest value of `lambda` such that all the coefficients are zero. For `alpha = 0` (ridge) `lambda.max` would be $\infty$; hence for this case we pick a value corresponding to a small value of `alpha` close to zero.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-216-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-255-
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:256:* `exact` indicates whether the exact values of the coefficients are desired. That is, if `exact = TRUE` and predictions are to be made at values of `s` not included in the original fit, these values of `s` are merged with `object$lambda` and the model is refit before predictions are made. If `exact = FALSE` (the default), then the `predict` function uses linear interpolation to make predictions at values of `s` that do not coincide with the lambdas used in the fitting algorithm.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-257-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-447-
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:448:Note that we set `type.coef = "2norm"`. Under this setting, a single curve is plotted per variable, with value equal to the $\ell_2$ norm. The default setting is `type.coef = "coef"`, where a coefficient plot is created for each response (multiple figures).
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-449-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-611-
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:612:`xvar` and `label` are the same as for other families, while `type.coef` applies only to multinomial regression and the multiresponse Gaussian model. It produces a figure of coefficients for each response variable if `type.coef = "coef"`, or a single figure showing the $\ell_2$-norms if `type.coef = "2norm"`.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-613-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-628-
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:629:Poisson regression is used to model count data under the assumption of Poisson error, or otherwise non-negative data where the mean and variance are proportional. Like the Gaussian and binomial models, the Poisson is a member of the exponential family of distributions. We usually model its positive mean on the log scale: $\log \mu(x) = \beta_0+\beta' x$.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-630-The log-likelihood for observations $\{x_i,y_i\}_1^N$ is given by
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-895-## Appendix 2: Comparison with Other Packages
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:896:Some people may want to use `glmnet` to solve the Lasso or elastic-net problem at a single $\lambda$. Here we compare the `glmnet` solution with those of other packages (such as CVX), and also illustrate the parameter settings needed in this situation.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-897-
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-905-
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:906:We first solve using `glmnet`. Notice that there is no intercept term in the objective function, and the columns of $X$ are not necessarily standardized, so the corresponding parameters have to be set to make it work correctly. In addition, there is a $1/(2n)$ factor before the quadratic term by default, so we need to adjust $\lambda$ accordingly. For the purpose of comparison, the `thresh` option is set to `1e-20`; however, this is not necessary in many practical applications.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-907-```{r, echo=FALSE}
##############################################
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-919-
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd:920:Alternatively, a more stable and __strongly recommended__ way to perform this task is to first fit the entire Lasso or elastic-net path without specifying `lambda`, and then pass the requested $\lambda_0$ to the `predict` function to extract the corresponding coefficients. In fact, if $\lambda_0$ is not in the $\lambda$ sequence generated by `glmnet`, the path will be refit along a new $\lambda$ sequence that merges the requested value $\lambda_0$ with the old sequence, and the coefficients will be returned at $\lambda_0$ based on the new fit. Remember to set `exact = TRUE` in the `predict` function to get the exact solution; otherwise it will be approximated by linear interpolation.
r-cran-glmnet-4.0-2/vignettes/glmnet.Rmd-921-
##############################################
r-cran-glmnet-4.0-2/R/cv.glmnetfit.R-8-    etastart=0;mustart=NULL;start=NULL
r-cran-glmnet-4.0-2/R/cv.glmnetfit.R:9:    eval(family$initialize)
r-cran-glmnet-4.0-2/R/cv.glmnetfit.R-10-    ##
##############################################
r-cran-glmnet-4.0-2/R/plot.relaxed.R-34-    rdev[which]=round(x$relaxed$dev.ratio*100, 2)
r-cran-glmnet-4.0-2/R/plot.relaxed.R:35:    out=data.frame(Df = x$df, `%Dev` = round(x$dev.ratio*100, 2), `%Dev R`=rdev,
r-cran-glmnet-4.0-2/R/plot.relaxed.R-36-                    Lambda = signif(x$lambda, digits),check.names=FALSE,row.names=seq(along=rdev))
##############################################
r-cran-glmnet-4.0-2/R/glmnetFlex.R-145-    etastart=0;mustart=NULL;start=NULL
r-cran-glmnet-4.0-2/R/glmnetFlex.R:146:    eval(family$initialize)
r-cran-glmnet-4.0-2/R/glmnetFlex.R-147-    ##
##############################################
r-cran-glmnet-4.0-2/inst/doc/Coxnet.Rmd-85-
r-cran-glmnet-4.0-2/inst/doc/Coxnet.Rmd:86:`coef(fit, s = cv.fit$lambda.min)` returns the $p$-length coefficient
r-cran-glmnet-4.0-2/inst/doc/Coxnet.Rmd:87:vector of the solution corresponding to $\lambda =$ `cv.fit$lambda.min`.
r-cran-glmnet-4.0-2/inst/doc/Coxnet.Rmd-88-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-139-```
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:140:It shows from left to right the number of nonzero coefficients (`Df`), the percent (of null) deviance explained (`%dev`) and the value of $\lambda$ (`Lambda`). Although by default `glmnet` calls for 100 values of `lambda`, the program stops early if `%dev` does not change sufficiently from one `lambda` to the next (typically near the end of the path).
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-141-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-172-```
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:173:`lambda.min` is the value of $\lambda$ that gives minimum mean cross-validated error. The other $\lambda$ saved is `lambda.1se`, which gives the most regularized model such that error is within one standard error of the minimum. To use that, we only need to replace `lambda.min` with `lambda.1se` above.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-174-```{r}
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-196-$$
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:197:where $\lambda \geq 0$ is a complexity parameter and $0 \leq \alpha \leq 1$ is a compromise between ridge ($\alpha = 0$) and lasso ($\alpha = 1$).
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-198-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-202-$$
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:203:where $\tilde{y}_i^{(j)} = \tilde{\beta}_0 + \sum_{\ell \neq j} x_{i\ell} \tilde{\beta}_\ell$, and $S(z, \gamma)$ is the soft-thresholding operator with value $\text{sign}(z)(|z|-\gamma)_+$.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-204-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-214-
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:215:* `lambda` can be provided, but is typically not, and the program constructs a sequence. When automatically generated, the $\lambda$ sequence is determined by `lambda.max` and `lambda.min.ratio`. The latter is the ratio of the smallest value of the generated $\lambda$ sequence (say `lambda.min`) to `lambda.max`. The program then generates `nlambda` values spaced evenly on the log scale from `lambda.max` down to `lambda.min`. `lambda.max` is not user-supplied, but is easily computed from the input $x$ and $y$: it is the smallest value of `lambda` such that all the coefficients are zero. For `alpha = 0` (ridge) `lambda.max` would be $\infty$; hence for this case we pick a value corresponding to a small value of `alpha` close to zero.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-216-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-255-
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:256:* `exact` indicates whether the exact values of the coefficients are desired. That is, if `exact = TRUE` and predictions are to be made at values of `s` not included in the original fit, these values of `s` are merged with `object$lambda` and the model is refit before predictions are made. If `exact = FALSE` (the default), then the `predict` function uses linear interpolation to make predictions at values of `s` that do not coincide with the lambdas used in the fitting algorithm.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-257-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-447-
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:448:Note that we set `type.coef = "2norm"`. Under this setting, a single curve is plotted per variable, with value equal to the $\ell_2$ norm. The default setting is `type.coef = "coef"`, where a coefficient plot is created for each response (multiple figures).
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-449-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-611-
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:612:`xvar` and `label` are the same as for other families, while `type.coef` applies only to multinomial regression and the multiresponse Gaussian model. It produces a figure of coefficients for each response variable if `type.coef = "coef"`, or a single figure showing the $\ell_2$-norms if `type.coef = "2norm"`.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-613-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-628-
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:629:Poisson regression is used to model count data under the assumption of Poisson error, or otherwise non-negative data where the mean and variance are proportional. Like the Gaussian and binomial models, the Poisson is a member of the exponential family of distributions. We usually model its positive mean on the log scale: $\log \mu(x) = \beta_0+\beta' x$.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-630-The log-likelihood for observations $\{x_i,y_i\}_1^N$ is given by
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-895-## Appendix 2: Comparison with Other Packages
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:896:Some people may want to use `glmnet` to solve the Lasso or elastic-net problem at a single $\lambda$. Here we compare the `glmnet` solution with those of other packages (such as CVX), and also illustrate the parameter settings needed in this situation.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-897-
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-905-
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:906:We first solve using `glmnet`. Notice that there is no intercept term in the objective function, and the columns of $X$ are not necessarily standardized, so the corresponding parameters have to be set to make it work correctly. In addition, there is a $1/(2n)$ factor before the quadratic term by default, so we need to adjust $\lambda$ accordingly. For the purpose of comparison, the `thresh` option is set to `1e-20`; however, this is not necessary in many practical applications.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-907-```{r, echo=FALSE}
##############################################
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-919-
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd:920:Alternatively, a more stable and __strongly recommended__ way to perform this task is to first fit the entire Lasso or elastic-net path without specifying `lambda`, and then pass the requested $\lambda_0$ to the `predict` function to extract the corresponding coefficients. In fact, if $\lambda_0$ is not in the $\lambda$ sequence generated by `glmnet`, the path will be refit along a new $\lambda$ sequence that merges the requested value $\lambda_0$ with the old sequence, and the coefficients will be returned at $\lambda_0$ based on the new fit. Remember to set `exact = TRUE` in the `predict` function to get the exact solution; otherwise it will be approximated by linear interpolation.
r-cran-glmnet-4.0-2/inst/doc/glmnet.Rmd-921-