===========================================================
 grep rough audit - static analysis tool v2.8
 written by @Wireghoul
=================================[justanotherhacker.com]===
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-135-
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd:136:The probability distribution of transitions from one state to another can be represented by a transition matrix $P=(p_{ij})_{i,j}$, where the element in position $(i,j)$ is the transition probability $p_{ij}$. For example, if $r=3$ the transition matrix $P$ is shown in Equation \ref{eq:trPropEx}
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-137-
##############################################
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-161-
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd:162:A state $s_{i}$ has period $k_{i}$ if any return to state $s_{i}$ must occur in multiples of $k_{i}$ steps, that is $k_{i}=\gcd\left\{ n:Pr\left(X_{n}=s_{i}\left|X_{0}=s_{i}\right.\right)>0\right\}$, where $\gcd$ is the greatest common divisor. If $k_{i}=1$ the state $s_{i}$ is said to be aperiodic, else if $k_{i}>1$ the state $s_{i}$ is periodic with period $k_{i}$. Loosely speaking, $s_{i}$ is periodic if it can only return to itself after a fixed number of transitions $k_{i}>1$ (or a multiple of $k_{i}$), else it is aperiodic.
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-163-
##############################################
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-196-
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd:197:Given a time-homogeneous Markov chain with transition matrix \emph{P}, a stationary distribution \emph{z} is a stochastic row vector such that $z=z\cdot P$, where $0\leq z_{j}\leq 1 \: \forall j$ and $\sum_{j}z_{j}=1$.
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-198-
##############################################
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-744-
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd:745: 1. `classes`, a matrix whose $(i, j)$ entry is `TRUE` if $s_i$ and $s_j$ are in the same communicating class.
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-746- 2. `closed`, a vector whose $i$-th entry indicates whether the communicating class to which $i$ belongs is closed.
##############################################
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-1443-
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd:1444:Let the data set be $D = \{(s_0, t_0), (s_1, t_1), ..., (s_{N-1}, t_{N-1})\}$ where $N=|D|$. Each $s_i$ is a state from the state space $S$ and during the time $[t_i,t_{i+1}]$ the chain is in state $s_i$. Let the parameters be represented by $\theta = \{\lambda, P\}$ where $\lambda$ is the vector of holding parameters for each state and $P$ is the transition matrix of the embedded discrete-time Markov chain.
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-1445-
##############################################
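
The vignette lines matched above introduce the transition matrix P, state periods, stationary distributions, and communicating classes. As a quick cross-check, these notions correspond to calls the markovchain package itself exports. A minimal sketch follows; the state names, probabilities, and the "Toy chain" label are illustrative and not taken from the vignette:

    library(markovchain)

    statesNames <- c("a", "b", "c")              # r = 3 states
    P <- matrix(c(0.5, 0.3, 0.2,
                  0.1, 0.8, 0.1,
                  0.2, 0.2, 0.6),
                nrow = 3, byrow = TRUE,
                dimnames = list(statesNames, statesNames))
    mc <- new("markovchain", states = statesNames,
              transitionMatrix = P, name = "Toy chain")

    period(mc)                 # gcd of possible return times; 1 means aperiodic
    steadyStates(mc)           # stationary row vector z with z = z %*% P and sum(z) = 1
    communicatingClasses(mc)   # states grouped into communicating classes
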
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-1499-
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd:1500:$n$ represents the number of samples to generate. There is an optional argument $T$ for `rctmc` that represents the termination time of the simulation. To use this feature, set $n$ to a very high value, say `Inf` (since we do not know the number of transitions beforehand) and set $T$ accordingly.
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-1501-
##############################################
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-1647-\[\theta = \{p(s|u), s \in \mathcal{A}, u \in \mathcal{A} \}\]
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd:1648:where $\sum_{s \in \mathcal{A}} p(s|u) = 1$ for each $u \in \mathcal{A}$.
r-cran-markovchain-0.8.5-2/vignettes/an_introduction_to_markovchain_package.Rmd-1649-
##############################################
r-cran-markovchain-0.8.5-2/vignettes/gsoc_2017_additions.Rmd-289-$$
r-cran-markovchain-0.8.5-2/vignettes/gsoc_2017_additions.Rmd:290:where $L = P - I$.
r-cran-markovchain-0.8.5-2/vignettes/gsoc_2017_additions.Rmd-291-
##############################################
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-135-
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd:136:The probability distribution of transitions from one state to another can be represented by a transition matrix $P=(p_{ij})_{i,j}$, where the element in position $(i,j)$ is the transition probability $p_{ij}$. For example, if $r=3$ the transition matrix $P$ is shown in Equation \ref{eq:trPropEx}
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-137-
##############################################
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-161-
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd:162:A state $s_{i}$ has period $k_{i}$ if any return to state $s_{i}$ must occur in multiples of $k_{i}$ steps, that is $k_{i}=\gcd\left\{ n:Pr\left(X_{n}=s_{i}\left|X_{0}=s_{i}\right.\right)>0\right\}$, where $\gcd$ is the greatest common divisor. If $k_{i}=1$ the state $s_{i}$ is said to be aperiodic, else if $k_{i}>1$ the state $s_{i}$ is periodic with period $k_{i}$. Loosely speaking, $s_{i}$ is periodic if it can only return to itself after a fixed number of transitions $k_{i}>1$ (or a multiple of $k_{i}$), else it is aperiodic.
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-163-
##############################################
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-196-
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd:197:Given a time-homogeneous Markov chain with transition matrix \emph{P}, a stationary distribution \emph{z} is a stochastic row vector such that $z=z\cdot P$, where $0\leq z_{j}\leq 1 \: \forall j$ and $\sum_{j}z_{j}=1$.
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-198-
##############################################
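
The `rctmc` usage matched at an_introduction_to_markovchain_package.Rmd:1500 (a fixed number of samples n, or a time horizon T combined with n = Inf) can be sketched as below. The two-state generator, the initial distribution, and the horizon T = 2 are illustrative values only:

    library(markovchain)

    ctmcStates <- c("up", "down")
    gen <- matrix(c(-2,  2,
                     1, -1),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(ctmcStates, ctmcStates))
    toyCTMC <- new("ctmc", states = ctmcStates, byrow = TRUE,
                   generator = gen, name = "Toy CTMC")

    rctmc(n = 5, ctmc = toyCTMC, initDist = c(0.5, 0.5))           # exactly 5 samples
    rctmc(n = Inf, ctmc = toyCTMC, initDist = c(0.5, 0.5), T = 2)  # stop at time T = 2 instead
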
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-744-
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd:745: 1. `classes`, a matrix whose $(i, j)$ entry is `TRUE` if $s_i$ and $s_j$ are in the same communicating class.
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-746- 2. `closed`, a vector whose $i$-th entry indicates whether the communicating class to which $i$ belongs is closed.
##############################################
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-1443-
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd:1444:Let the data set be $D = \{(s_0, t_0), (s_1, t_1), ..., (s_{N-1}, t_{N-1})\}$ where $N=|D|$. Each $s_i$ is a state from the state space $S$ and during the time $[t_i,t_{i+1}]$ the chain is in state $s_i$. Let the parameters be represented by $\theta = \{\lambda, P\}$ where $\lambda$ is the vector of holding parameters for each state and $P$ is the transition matrix of the embedded discrete-time Markov chain.
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-1445-
##############################################
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-1499-
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd:1500:$n$ represents the number of samples to generate. There is an optional argument $T$ for `rctmc` that represents the termination time of the simulation. To use this feature, set $n$ to a very high value, say `Inf` (since we do not know the number of transitions beforehand) and set $T$ accordingly.
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-1501-
##############################################
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-1647-\[\theta = \{p(s|u), s \in \mathcal{A}, u \in \mathcal{A} \}\]
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd:1648:where $\sum_{s \in \mathcal{A}} p(s|u) = 1$ for each $u \in \mathcal{A}$.
r-cran-markovchain-0.8.5-2/inst/doc/an_introduction_to_markovchain_package.Rmd-1649-
##############################################
r-cran-markovchain-0.8.5-2/inst/doc/gsoc_2017_additions.Rmd-289-$$
r-cran-markovchain-0.8.5-2/inst/doc/gsoc_2017_additions.Rmd:290:where $L = P - I$.
r-cran-markovchain-0.8.5-2/inst/doc/gsoc_2017_additions.Rmd-291-
##############################################
r-cran-markovchain-0.8.5-2/debian/tests/run-unit-test-6-if [ "$AUTOPKGTEST_TMP" = "" ] ; then
r-cran-markovchain-0.8.5-2/debian/tests/run-unit-test:7: AUTOPKGTEST_TMP=`mktemp -d /tmp/${debname}-test.XXXXXX`
r-cran-markovchain-0.8.5-2/debian/tests/run-unit-test-8- trap "rm -rf $AUTOPKGTEST_TMP" 0 INT QUIT ABRT PIPE TERM
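
The parameter sets matched above, theta = {p(s|u)} for the discrete-time chain (an_introduction_to_markovchain_package.Rmd:1647-1648) and theta = {lambda, P} for the continuous-time one (line 1444), are what the package's fitting routines estimate. A minimal sketch, assuming the markovchainFit() and ctmcFit() interfaces described in the vignette; both data sequences below are made up:

    library(markovchain)

    # Discrete time: estimate the transition probabilities p(s|u) from a state sequence
    dtSequence <- c("a", "b", "b", "a", "c", "b", "a", "a", "b", "c")
    dtFit <- markovchainFit(data = dtSequence, method = "mle")
    dtFit$estimate            # fitted transition matrix; each row sums to 1

    # Continuous time: states plus the times at which they were entered,
    # i.e. a data set D = {(s_i, t_i)} as in the matched line 1444
    ctData <- list(c("a", "b", "c", "a", "b"),
                   c(0, 0.8, 2.1, 2.4, 4.0))
    ctmcFit(ctData)           # fitted holding rates and embedded transition matrix
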