===========================================================
                                      .___ __  __   
          _________________  __ __  __| _/|__|/  |_ 
         / ___\_` __ \__  \ |  |  \/ __ | | \\_  __\
        / /_/  >  | \// __ \|  |  / /_/ | |  ||  |  
        \___  /|__|  (____  /____/\____ | |__||__|  
       /_____/            \/           \/           
              grep rough audit - static analysis tool
                  v2.8 written by @Wireghoul
=================================[justanotherhacker.com]===
r-cran-psych-2.0.9/man/omega.graph.Rd-84-#24 mental tests from Holzinger-Swineford-Harman
r-cran-psych-2.0.9/man/omega.graph.Rd:85:if(require(GPArotation) ) {om24 <- omega(Harman74.cor$cov,4) } #run omega
r-cran-psych-2.0.9/man/omega.graph.Rd-86-
##############################################
r-cran-psych-2.0.9/man/diagram.Rd:1:\name{diagram}
\Rdversion{1.1}
\alias{diagram}
\alias{dia.rect}
\alias{dia.ellipse}
\alias{dia.ellipse1}
\alias{dia.arrow}
\alias{dia.curve}
\alias{dia.curved.arrow}
\alias{dia.self}
\alias{dia.shape}
\alias{dia.triangle}
\alias{dia.cone}
\alias{multi.rect}
\alias{multi.arrow}
\alias{multi.curved.arrow}
\alias{multi.self}
\title{Helper functions for drawing path model diagrams}
\description{Path models are used to describe structural equation models or cluster analytic output.  These functions provide the primitives for drawing path models and are used as a substitute for some of the functionality of Rgraphviz.}
\usage{
diagram(fit, ...)
dia.rect(x, y = NULL, labels = NULL, cex = 1, xlim = c(0, 1), ylim = c(0, 1), draw = TRUE, ...)
dia.ellipse(x, y = NULL, labels = NULL, cex = 1, e.size = .05, xlim = c(0, 1), ylim = c(0, 1), draw = TRUE, ...)
dia.triangle(x, y = NULL, labels = NULL, cex = 1, xlim = c(0, 1), ylim = c(0, 1), ...)
dia.ellipse1(x, y, e.size = .05, xlim = c(0, 1), ylim = c(0, 1), draw = TRUE, ...)
dia.shape(x, y = NULL, labels = NULL, cex = 1, e.size = .05, xlim = c(0, 1), ylim = c(0, 1), shape = 1, ...)
dia.arrow(from, to, labels = NULL, scale = 1, cex = 1, adj = 2, both = FALSE, pos = NULL, l.cex, gap.size, draw = TRUE, col = "black", lty = "solid", ...)
dia.curve(from, to, labels = NULL, scale = 1, ...)
dia.curved.arrow(from, to, labels = NULL, scale = 1, both = FALSE, dir = NULL, draw = TRUE, ...)
dia.self(location, labels = NULL, scale = .8, side = 2, draw = TRUE, ...)
dia.cone(x = 0, y = -2, theta = 45, arrow = TRUE, curves = TRUE, add = FALSE, labels = NULL, xlim = c(-1, 1), ylim = c(-1, 1), ...)
multi.self(self.list, ...)
multi.arrow(arrows.list, ...)
multi.curved.arrow(curved.list, ...)
multi.rect(rect.list, ...)
}
\arguments{
  \item{fit}{The results from a factor analysis (\code{\link{fa}}), components analysis (\code{\link{principal}}), omega reliability analysis (\code{\link{omega}}), cluster analysis (\code{\link{iclust}}), top-down analysis (\code{\link{bassAckward}}), or a confirmatory factor analysis (cfa) or structural equation model (sem) fit with the lavaan package.}
  \item{x}{x coordinate of a rectangle or ellipse}
  \item{y}{y coordinate of a rectangle or ellipse}
  \item{e.size}{The size of the ellipse (scaled by the number of variables)}
  \item{labels}{Text to insert in the rectangle, ellipse, or arrow}
  \item{cex}{Adjust the text size}
  \item{col}{Line color (normal meaning for plot figures)}
  \item{lty}{Line type}
  \item{l.cex}{Adjust the text size in arrows; defaults to cex, which in turn defaults to 1}
  \item{gap.size}{Tweak the gap in an arrow so that the label fits in the gap}
  \item{adj}{Where to put the label along the arrows (values are then divided by 4)}
  \item{both}{Should the arrows have arrow heads on both ends?}
  \item{scale}{Modifies the size of the rectangle and ellipse as well as the curvature of curves.  (For curvature, positive numbers are concave down and to the left.)}
  \item{from}{Where arrows and curves go from}
  \item{to}{Where arrows and curves go to}
  \item{location}{Where is the rectangle?}
  \item{shape}{Which shape to draw}
  \item{xlim}{Default x range}
  \item{ylim}{Default y range}
  \item{draw}{Draw the text box}
  \item{side}{On which side of the boxes should error arrows appear?}
  \item{theta}{Angle in degrees of the vectors}
  \item{arrow}{Draw arrows for the edges in dia.cone}
  \item{add}{If TRUE, plot on the previous plot}
  \item{curves}{If TRUE, draw curves between the arrows in dia.cone}
  \item{pos}{The position of the text in the arrow.  Follows the standard text positions of 1, 2, 3, 4 or NULL}
  \item{dir}{Should the direction of the curve be calculated dynamically, or set as "up" or "left"?}
  \item{\dots}{Most graphic parameters may be passed here}
  \item{self.list}{List saved from dia.self}
  \item{arrows.list}{List saved from dia.arrow}
  \item{curved.list}{List saved from dia.curved.arrow}
  \item{rect.list}{List saved from dia.rect}
}
\details{
The diagram function calls \code{\link{fa.diagram}}, \code{\link{omega.diagram}}, \code{\link{ICLUST.diagram}}, \code{\link{lavaan.diagram}} or \code{\link{bassAckward.diagram}} depending upon the class of the fit input.  See those functions for the particular parameter values.

The remaining functions are the graphic primitives used by \code{\link{fa.diagram}}, \code{\link{structure.diagram}}, \code{\link{omega.diagram}}, \code{\link{ICLUST.diagram}} and \code{\link{het.diagram}}.  They create rectangles, ellipses or triangles surrounding text, connect them with straight or curved arrows, and can draw an arrow from and to the same rectangle.

To speed up the plotting, dia.rect and dia.arrow can suppress the actual drawing and instead return the locations and values to plot.  These values can then be passed directly to text or rect with matrix input, which leads to an impressive increase in speed when drawing many variables.  The functions \code{\link{multi.rect}}, \code{\link{multi.self}}, \code{\link{multi.arrow}} and \code{\link{multi.curved.arrow}} take the saved output from the appropriate primitives and draw them all at once.

Each shape (ellipse, rectangle or triangle) has left, right, top, bottom and center coordinates that may be used to connect the arrows.

Curves are double-headed arrows.  By default they go from one location to another and curve either left or right (if going up or down) or up or down (if going left to right).  The direction of the curve may be set by dir="up" for left-right curvature.

The helper functions were developed to get around the infelicities associated with trying to install Rgraphviz and graphviz.  These functions form the core of \code{\link{fa.diagram}} and \code{\link{het.diagram}}.  Better documentation will be added as these functions get improved.  Currently the helper functions are just a work-around for Rgraphviz.

dia.cone draws a cone with (optionally) arrows as sides and centers to show the problem of factor indeterminacy.
}
\value{Graphic output}
\author{William Revelle}
\seealso{The diagram functions that use the dia functions: \code{\link{fa.diagram}}, \code{\link{structure.diagram}}, \code{\link{omega.diagram}}, and \code{\link{ICLUST.diagram}}.}
\examples{
#first, show the primitives
xlim=c(-2,10)
ylim=c(0,10)
plot(NA,xlim=xlim,ylim=ylim,main="Demonstration of diagram functions",axes=FALSE,xlab="",ylab="")
ul <- dia.rect(1,9,labels="upper left",xlim=xlim,ylim=ylim)
ml <- dia.rect(1,6,"middle left",xlim=xlim,ylim=ylim)
ll <- dia.rect(1,3,labels="lower left",xlim=xlim,ylim=ylim)
bl <- dia.rect(1,1,"bottom left",xlim=xlim,ylim=ylim)
lr <- dia.ellipse(7,3,"lower right",xlim=xlim,ylim=ylim,e.size=.07)
ur <- dia.ellipse(7,9,"upper right",xlim=xlim,ylim=ylim,e.size=.07)
mr <- dia.ellipse(7,6,"middle right",xlim=xlim,ylim=ylim,e.size=.07)
lm <- dia.triangle(4,1,"Lower Middle",xlim=xlim,ylim=ylim)
br <- dia.rect(9,1,"bottom right",xlim=xlim,ylim=ylim)
dia.curve(from=ul$left,to=bl$left,"double headed",scale=-1)
dia.arrow(from=lr,to=ul,labels="right to left")
dia.arrow(from=ul,to=ur,labels="left to right")
dia.curved.arrow(from=lr,to=ll,labels="right to left")
dia.curved.arrow(to=ur,from=ul,labels="left to right")
dia.curve(ll$top,ul$bottom,"right")  #for rectangles, specify where to point
dia.curve(ll$top,ul$bottom,"left",scale=-1)  #for rectangles, specify where to point
dia.curve(mr,ur,"up")  #but for ellipses, you may just point to it
dia.curve(mr,lr,"down")
dia.curve(mr,ur,"up")
dia.curved.arrow(mr,ur,"up")  #but for ellipses, you may just point to it
dia.curved.arrow(mr,lr,"down")  #but for ellipses, you may just point to it
dia.curved.arrow(ur$right,mr$right,"3")
dia.curve(ml,mr,"across")
dia.curve(ur$right,lr$right,"top down",scale=2)
dia.curved.arrow(br$top,lr$right,"up")
dia.curved.arrow(bl,br,"left to right")
dia.curved.arrow(br,bl,"right to left",scale=-1)
dia.arrow(bl,ll$bottom)
dia.curved.arrow(ml,ll$right)
dia.curved.arrow(mr,lr$top)
#now, put them together in a factor analysis diagram
v9 <- sim.hierarchical()
f3 <- fa(v9,3,rotate="cluster")
fa.diagram(f3,error=TRUE,side=3)
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{multivariate}
\keyword{hplot}
% __ONLY ONE__ keyword per line
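For orientation, the primitives documented above compose directly into simple latent-variable drawings. The following minimal sketch uses only calls exercised in the \examples section; the layout coordinates and the labels F1 and x1..x3 are arbitrary illustrative choices, not taken from the package.

# A small latent-variable sketch built from the documented primitives (illustrative only).
library(psych)
plot(NA, xlim = c(0, 10), ylim = c(0, 10), axes = FALSE, xlab = "", ylab = "",
     main = "One latent variable, three indicators")
F1 <- dia.ellipse(7, 5, "F1", e.size = .09, xlim = c(0, 10), ylim = c(0, 10))  # latent variable
x1 <- dia.rect(2, 8, "x1", xlim = c(0, 10), ylim = c(0, 10))                   # observed variables
x2 <- dia.rect(2, 5, "x2", xlim = c(0, 10), ylim = c(0, 10))
x3 <- dia.rect(2, 2, "x3", xlim = c(0, 10), ylim = c(0, 10))
dia.arrow(F1, x1$right, labels = "a")   # loadings drawn as labelled arrows
dia.arrow(F1, x2$right, labels = "b")
dia.arrow(F1, x3$right, labels = "c")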
##############################################
r-cran-psych-2.0.9/vignettes/intro.Rnw-1115-
r-cran-psych-2.0.9/vignettes/intro.Rnw:1116:An important generalization of multiple regression and multiple correlation is \iemph{set correlation}, developed by \cite{cohen:set} and discussed by \cite{cohen:03}.  Set correlation is a multivariate generalization of multiple regression that estimates the amount of variance shared between two sets of variables.  It also allows for examining the relationship between two sets while controlling for a third set.  This is implemented in the \pfun{setCor} function.  The set correlation is $$R^{2} = 1 - \prod_{i=1}^{n}(1-\lambda_{i})$$ where $\lambda_{i}$ is the $i$th eigenvalue of the matrix $$R = R_{xx}^{-1}R_{xy}R_{yy}^{-1}R_{yx}.$$  Unfortunately, there are several cases where set correlation will give results that are much too high.  This happens when a few variables in the first set are highly related to variables in the second set, even though most are not.  The set correlation can then be very high even though the overall relationship between the sets is not.  In such cases an alternative statistic, based upon the average canonical correlation, may be more appropriate.
r-cran-psych-2.0.9/vignettes/intro.Rnw-1117-
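To make the quoted formula concrete, here is a minimal sketch of how the set correlation follows from those eigenvalues. The helper name set.cor.sketch and the example columns are illustrative assumptions; this is not the psych::setCor implementation.

# Minimal sketch of the set-correlation formula above: R^2 = 1 - prod(1 - lambda_i),
# where lambda_i are the eigenvalues of Rxx^-1 Rxy Ryy^-1 Ryx (the squared canonical
# correlations).  set.cor.sketch is a hypothetical helper, not the psych::setCor code.
set.cor.sketch <- function(data, x, y) {
  R   <- cor(data, use = "pairwise")                     # full correlation matrix
  Rxx <- R[x, x]; Ryy <- R[y, y]; Rxy <- R[x, y]
  M   <- solve(Rxx) %*% Rxy %*% solve(Ryy) %*% t(Rxy)    # Rxx^-1 Rxy Ryy^-1 Ryx
  lambda <- Re(eigen(M, only.values = TRUE)$values)
  1 - prod(1 - lambda)                                   # variance shared between the two sets
}
# e.g. with the built-in attitude data:
# set.cor.sketch(attitude, x = c("complaints", "learning"), y = c("rating", "raises"))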
##############################################
r-cran-psych-2.0.9/R/ICLUST.cluster.R:1:ICLUST.cluster <- function(r.mat, ICLUST.options, smc.items) {
#should allow for raw data, correlations or covariances
#options:  alpha = 1 (minimum alpha)   2 (average alpha)   3 (maximum alpha)
#          beta  = 1 (minimum beta)    2 (average beta)    3 (maximum beta)
#          correct for reliability
#          reverse score items if negative correlations
#          stop clustering if beta for new clusters < beta.min
#          output = 1 (short)   2 (show steps)   3 (show rejects as we go)
#
#initialize various arrays and get ready for the first pass
output <- ICLUST.options$output
num.var <- nrow(r.mat)
keep.clustering <- TRUE   #used to determine when we are finished clustering
results <- data.frame(matrix(rep(0, 18 * (num.var - 1)), ncol = 18))   #create the data frame for the results
#results <- matrix(rep(0, 18 * (num.var - 1)), ncol = 18)   #a matrix would be faster, but we could not address it by name
names(results) <- c("Item/Cluster", "Item/Clust", "similarity", "correlation", "alpha1", "alpha2",
                    "beta1", "beta2", "size1", "size2", "rbar1", "rbar2", "r1", "r2",
                    "alpha", "beta", "rbar", "size")
rownames(results) <- paste("C", 1:(num.var - 1), sep = "")
digits <- ICLUST.options$digits
clusters <- diag(1, nrow = nrow(r.mat))   #the original cluster structure is one-item clusters
if (is.null(rownames(r.mat))) {rownames(r.mat) <- paste("V", 1:num.var, sep = "")}
rownames(clusters) <- rownames(r.mat)
colnames(clusters) <- paste("V", 1:num.var, sep = "")
diag(r.mat) <- 0
row.range <- apply(r.mat, 1, range, na.rm = TRUE)
item.max <- pmax(abs(row.range[1, ]), abs(row.range[2, ]))   #find the largest absolute similarity
diag(r.mat) <- 1
count <- 1

#master loop
while (keep.clustering) {   #loop until we figure out we should stop
  #find similarities -- we do most of the work on a copy of r.mat
  #cluster.stats <- cluster.cor(clusters, r.mat, FALSE, SMC = ICLUST.options$SMC)   #deleted 30/12/13
  cluster.stats <- cluster.cor(clusters, r.mat, FALSE, SMC = ICLUST.options$SMC, item.smc = smc.items)
  sim.mat <- cluster.stats$cor   #the correlation matrix
  diag(sim.mat) <- 0   #we don't want 1's on the diagonal to mess up the maximum
  #two ways to estimate reliability: for one-item clusters, the maximum correlation; for larger clusters, alpha
  #this use of the initial maximum should be an option
  if (ICLUST.options$correct) {   #find the largest and smallest similarities for each variable
    row.range <- apply(sim.mat, 1, range, na.rm = TRUE)
    row.max <- pmax(abs(row.range[1, ]), abs(row.range[2, ]))   #find the largest absolute similarity
  } else {row.max <- rep(1, nrow(sim.mat))}   #don't correct for the largest similarity
  item.rel <- cluster.stats$alpha
  for (i in 1:length(item.rel)) {
    if (cluster.stats$size[i] < 2) {
      item.rel[i] <- row.max[i]
      #figure out item betas here?
    }
  }
  if (output > 3) print(sim.mat, digits = digits)   #the similarities corrected for maximum r

  if (ICLUST.options$correct) {
    sq.max <- diag(1/sqrt(item.rel))   #used to correct for reliabilities
    sim <- sq.max %*% sim.mat %*% sq.max   #corrects for reliabilities but messes up the correlations of two-item clusters with items
  } else {sim <- sim.mat}
  diag(sim) <- NA   #do not consider the diagonal when looking for maxima

  #find the most similar pair and apply the tests for whether they should be combined
  test.alpha <- FALSE
  test.beta <- FALSE
  while (!(test.alpha & test.beta)) {
    max.cell <- which.max(sim)   #global maximum
    if (length(max.cell) < 1) {
      keep.clustering <- FALSE
      break   #there are no non-NA values left
    }
    sign.max <- 1
    if (ICLUST.options$reverse) {   #the normal case is to reflect if necessary
      min.cell <- which.min(sim)    #location of the global minimum
      if (sim[max.cell] < abs(sim[min.cell])) {
        sign.max <- -1
        max.cell <- min.cell
      }
      if (sim[max.cell] < 0.0) {sign.max <- -1}
    }   #handles the weird case where all the similarities are negative -- happens towards the end of clustering
    max.col <- trunc(max.cell/nrow(sim)) + 1          #which column is the maximum in?
    max.row <- max.cell - (max.col - 1) * nrow(sim)   #which row? (need to fix the case of the first column)
    if (max.row < 1) {
      max.row <- nrow(sim)
      max.col <- max.col - 1
    }
    size1 <- cluster.stats$size[max.row]
    if (size1 < 2) {
      V1 <- 1
      beta1 <- item.rel[max.row]
      alpha1 <- item.rel[max.row]
      rbar1 <- item.rel[max.row]
    } else {
      rbar1 <- results[cluster.names[max.row], "rbar"]
      beta1 <- results[cluster.names[max.row], "beta"]
      alpha1 <- results[cluster.names[max.row], "alpha"]
      V1 <- size1 + size1 * (size1 - 1) * rbar1
    }
    size2 <- cluster.stats$size[max.col]
    if (size2 < 2) {
      V2 <- 1
      beta2 <- item.rel[max.col]
      alpha2 <- item.rel[max.col]
      rbar2 <- item.rel[max.col]
    } else {
      rbar2 <- results[cluster.names[max.col], "rbar"]
      beta2 <- results[cluster.names[max.col], "beta"]
      alpha2 <- results[cluster.names[max.col], "alpha"]
      V2 <- size2 + size2 * (size2 - 1) * rbar2
    }
    Cov12 <- sign.max * sim.mat[max.cell] * sqrt(V1 * V2)   #flips the sign of the covariance for negative correlations
    r12 <- Cov12/(size1 * size2)                            #average between-cluster r
    V12 <- V1 + V2 + 2 * Cov12                              #the variance of the new cluster
    size12 <- size1 + size2
    V12c <- (V12 - size12) * (size12/(size12 - 1))          #true variance (using the average r on the diagonal)
    rbar <- V12c/(size12^2)
    alpha <- V12c/V12
    #combine these two rows if the various criteria are passed
    #beta.weighted <- size12^2 * sign.max * r12/V12   #added June 2009, but can produce negative betas
    beta.weighted <- size12^2 * r12/V12               #corrected July 28, 2009
    beta.unweighted <- 2 * sign.max * sim.mat[max.cell]/(1 + sign.max * sim.mat[max.cell])
    if (ICLUST.options$weighted) {beta.combined <- beta.weighted} else {beta.combined <- beta.unweighted}

    #what is the correlation of this new cluster with the two subclusters?
    #this considers item overlap problems; there are two alternative solutions:
    #a) (cor.gen=TRUE)  finds the correlation due to a shared general factor
    #b) (cor.gen=FALSE) finds the correlation for the general + group factors but removes the item overlap problem
    #neither seems optimal: a) will correctly identify non-correlated clusters, but b) is less affected by small clusters
    if (ICLUST.options$cor.gen) {
      c1 <- r12 * size1 * size1 + Cov12   #corrected covariance
      c2 <- sign.max * (r12 * size2 * size2 + Cov12)
    } else {
      c1 <- size1^2 * rbar1 + Cov12
      c2 <- sign.max * (size2^2 * rbar2 + Cov12)
    }
    if ((size1 < 2) && (size2 < 2)) {   #r2 is always flipped (if necessary) when forming clusters
      r1 <- sqrt(abs(rbar1))    #corrects for reliability in a two-item cluster
      r2 <- sign.max * r1       #flips the sign if the two items are negatively correlated
    } else {   #this next part corrects for item overlap as well as the reliability of the subcluster
      if (ICLUST.options$correct.cluster) {   #correct is the default option
        if (TRUE) {
          r1 <- c1/sqrt((V1 - size1 + size1 * rbar1) * V12)
          if (size2 < 2) {
            r2 <- c2/sqrt(abs(rbar2) * V12)
          } else {
            #r2 <- sign.max * c2/sqrt((V2 - size2 + size2 * rbar2) * V12c)   #changed yet again on 6/10/10
            r2 <- c2/sqrt((V2 - size2 + size2 * rbar2) * V12c)
          }
        } else {
          if (size1 < 2) {r1 <- c1/sqrt(abs(rbar1) * V12)} else {r1 <- c1/sqrt((V1 - size1 + size1 * rbar1) * V12c)}
          #flip the smaller of the two clusters -- no, flip r2
          if (size2 < 2) {r2 <- c2/sqrt(abs(rbar2) * V12)} else {r2 <- c2/sqrt((V2 - size2 + size2 * rbar2) * V12c)}
          #r2 <- c2/sqrt((V2 - size2 + size2 * rbar2) * V12c)
        }
      } else {
        if (TRUE) {
          r1 <- c1/sqrt(V1 * V12)   #do not correct
          r2 <- sign.max * c2/sqrt(V2 * V12)
        } else {
          r1 <- sign.max * c1/sqrt(V1 * V12)
        }
        #flip the smaller of the two clusters -- flip r2
        r2 <- c2/sqrt(V2 * V12)
      }
    }
    #test whether we should combine these two clusters
    #first, does alpha increase?
    test.alpha <- TRUE
    if (ICLUST.options$alpha > 0) {   #should we apply the alpha test?
      if (ICLUST.options$alpha.size < min(size1, size2)) {
        switch(ICLUST.options$alpha,
          {if (alpha < min(alpha1, alpha2)) {
             if (output > 2) print(paste('do not combine', cluster.names[max.row], "with", cluster.names[max.col],
               'new alpha =', alpha, 'old alpha1 =', alpha1, "old alpha2 =", alpha2))
             test.alpha <- FALSE}},
          {if (alpha < mean(alpha1, alpha2)) {
             if (output > 2) print(paste('do not combine', cluster.names[max.row], "with", cluster.names[max.col],
               'new alpha =', alpha, 'old alpha1 =', alpha1, "old alpha2 =", alpha2))
             test.alpha <- FALSE}},
          {if (alpha < max(alpha1, alpha2)) {
             if (output > 2) print(paste('do not combine', cluster.names[max.row], "with", cluster.names[max.col],
               'new alpha =', alpha, 'old alpha1 =', alpha1, "old alpha2 =", alpha2))
             test.alpha <- FALSE}})   #end switch
      }   #end if options$alpha.size
    }
    #second, does beta increase?
    test.beta <- TRUE
    if (ICLUST.options$beta > 0) {   #should we apply the beta test?
      if (ICLUST.options$beta.size < min(size1, size2)) {
        switch(ICLUST.options$beta,
          {if (beta.combined < min(beta1, beta2)) {
             if (output > 2) print(paste('do not combine', cluster.names[max.row], "with", cluster.names[max.col],
               'new beta =', round(beta.combined, digits), 'old beta1 =', round(beta1, digits), "old beta2 =", round(beta2, digits)))
             test.beta <- FALSE}},
          {if (beta.combined < mean(beta1, beta2)) {
             if (output > 2) print(paste('do not combine', cluster.names[max.row], "with", cluster.names[max.col],
               'new beta =', round(beta.combined, digits), 'old beta1 =', round(beta1, digits), "old beta2 =", round(beta2, digits)))
             test.beta <- FALSE}},
          {if (beta.combined < max(beta1, beta2)) {
             if (output > 2) print(paste('do not combine', cluster.names[max.row], "with", cluster.names[max.col],
               'new beta =', round(beta.combined, digits), 'old beta1 =', round(beta1, digits), "old beta2 =", round(beta2, digits)))
             test.beta <- FALSE}})   #end switch
      }   #end if options$beta.size
    }
    if (test.beta & test.alpha) {
      break
    } else {   #we have failed the combining criteria
      if ((ICLUST.options$n.clus > 0) & ((num.var - count) >= ICLUST.options$n.clus)) {
        warning("Clusters formed as requested do not meet the alpha and beta criteria. Perhaps you should rethink the number of clusters setting.")
        break
      } else {
        if (beta.combined < ICLUST.options$beta.min) {
          keep.clustering <- FALSE   #the most similar pair is not very similar, so we should quit
          break
        } else {
          sim[max.row, max.col] <- NA
          sim[max.col, max.row] <- NA
        }
      }
    }   #end of test.beta & test.alpha
  }   #end of the while(!(test.alpha & test.beta)) loop

  #combine and summarize
  if (keep.clustering) {   #we have passed the alpha and beta tests, so combine these two variables
    clusters[, max.row] <- clusters[, max.row] + sign.max * clusters[, max.col]
    cluster.names <- colnames(clusters)
    #summarize the results
    results[count, 1] <- cluster.names[max.row]
    results[count, 2] <- cluster.names[max.col]
    results[count, "similarity"] <- sim[max.cell]
    results[count, "correlation"] <- sim.mat[max.cell]
    results[count, "alpha1"] <- item.rel[max.row]
    results[count, "alpha2"] <- item.rel[max.col]
    size1 <- cluster.stats$size[max.row]
    size2 <- cluster.stats$size[max.col]
    results[count, "size1"] <- size1
    results[count, "size2"] <- size2
    results[count, "beta1"] <- beta1
    results[count, "beta2"] <- beta2
    results[count, "rbar1"] <- rbar1
    results[count, "rbar2"] <- rbar2
    results[count, "r1"] <- r1
    results[count, "r2"] <- r2
    results[count, "beta"] <- beta.combined
    results[count, "alpha"] <- alpha
    results[count, "rbar"] <- rbar
    results[count, "size"] <- size12
    #update
    cluster.names[max.row] <- paste("C", count, sep = "")
    colnames(clusters) <- cluster.names
    clusters <- clusters[, -max.col]
    cluster.names <- colnames(clusters)
    #row.max <- row.max[-max.col]
  }   #end of the combine section
  if (output > 1) print(results[count, ], digits = digits)
  count <- count + 1
  if ((num.var - count) < ICLUST.options$n.clus) {keep.clustering <- FALSE}
  if (num.var - count < 1) {keep.clustering <- FALSE}   #only one cluster left
}   #end of the keep.clustering loop

#make clusters in the direction of the majority of the items
#direct <- -(colSums(clusters) < 0)
#clusters <- t(diag(direct) %*% t(clusters))
#colnames(clusters) <- cluster.names
ICLUST.cluster <- list(results = results, clusters = clusters, number = num.var - count)
}   #end ICLUST.cluster
#modified June 12, 2008 to calculate the item-cluster correlation for clusters of size 2
#modified June 14, 2009 to find weighted or unweighted beta
#  (unweighted had been the default option before, but it would seem that weighted makes more sense)
#modified June 5, 2010 to correct the graphic tree paths
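For readers tracing the combine step above, the following worked illustration (not part of the package source) traces the alpha and weighted beta obtained when two single-item clusters correlating .6 are merged; as expected, both reduce to 2r/(1+r) = .75 in this case.

# Worked illustration of the combine-step arithmetic above (illustrative only):
# merge two single-item clusters whose correlation is r = .6.
size1 <- 1; size2 <- 1; V1 <- 1; V2 <- 1
r12    <- .6                                      # average between-cluster correlation
Cov12  <- r12 * sqrt(V1 * V2)                     # covariance of the two clusters
V12    <- V1 + V2 + 2 * Cov12                     # variance of the merged cluster (3.2)
size12 <- size1 + size2
V12c   <- (V12 - size12) * (size12/(size12 - 1))  # common ("true") variance (2.4)
alpha  <- V12c/V12                                # 0.75
beta.weighted <- size12^2 * r12/V12               # 0.75, i.e. 2r/(1 + r)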
##############################################
r-cran-psych-2.0.9/inst/doc/intro.Rnw-1115-
r-cran-psych-2.0.9/inst/doc/intro.Rnw:1116:An important generalization of multiple regression and multiple correlation is \iemph{set correlation}, developed by \cite{cohen:set} and discussed by \cite{cohen:03}.  Set correlation is a multivariate generalization of multiple regression that estimates the amount of variance shared between two sets of variables.  It also allows for examining the relationship between two sets while controlling for a third set.  This is implemented in the \pfun{setCor} function.  The set correlation is $$R^{2} = 1 - \prod_{i=1}^{n}(1-\lambda_{i})$$ where $\lambda_{i}$ is the $i$th eigenvalue of the matrix $$R = R_{xx}^{-1}R_{xy}R_{yy}^{-1}R_{yx}.$$  Unfortunately, there are several cases where set correlation will give results that are much too high.  This happens when a few variables in the first set are highly related to variables in the second set, even though most are not.  The set correlation can then be very high even though the overall relationship between the sets is not.  In such cases an alternative statistic, based upon the average canonical correlation, may be more appropriate.
r-cran-psych-2.0.9/inst/doc/intro.Rnw-1117-