Monday, July 12, 2010

Factor Analysis

Factor Analysis

Principal components factor analysis

Use of extracted factors in multivariate dependency models





KEY CONCEPTS
*****
Factor Analysis

Interdependency technique
Assumptions of factor analysis
Latent variable (i.e. factor)
Research questions answered by factor analysis
Applications of factor analysis
Exploratory applications
Confirmatory applications
R factor analysis
Q factor analysis
Factor loadings
Steps in factor analysis
Initial v final solution
Factorability of an intercorrelation matrix
Bartlett's test of sphericity and its interpretation
Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) and its interpretation
Identity matrix and the determinant of an identity matrix
Methods for extracting factors
Principal components
Maximum likelihood method
Principal axis method
Unweighted least squares
Generalized least squares
Alpha method
Image factoring
Criteria for determining the number of factors
Eigenvalue greater than 1.0
Cattell's scree plot
Percent and cumulative percent of variance explained by the factors extracted
Component matrix and factor loadings
Communality of a variable
Determining what a factor measures and naming a factor
Factor rotation and its purpose
Varimax
Quartimax
Equimax
Orthogonal v oblique rotation
Reproduced correlation matrix
Computing factor scores
Factor score coefficient matrix
Using factor score in multivariate dependency models

Lecture Outline


 Identifying patterns of intercorrelation

 Factors v correlations

 Steps in the factor analysis process

 Testing for "factorability"

 Initial v final factor solutions

 Naming factors

 Factor rotation

 Computing factor scores

 Using factors scores in multivariate dependency models

Factor Analysis


Interdependency Technique

Seeks to find the latent factors that account for the patterns of collinearity among multiple metric variables


Assumptions

Large enough sample to yield reliable estimates of the correlations among the variables

Statistical inference is improved if the variables are multivariate normal

Relationships among the pairs of variables are linear

Absence of outliers among the cases

Some degree of collinearity among the variables but not an extreme degree or singularity among the variables

Large ratio of N / k

An Intercorrelation Matrix


Basic assumption: Variables that correlate significantly with each other do so because they are measuring the same "thing".

The problem: What is the "thing" that correlated variables are measuring in common?

Given nine metric variables …


      X1    X2    X3    X4    X5    X6    X7    X8    X9
X1  1.00  0.80  0.70  0.95  0.01  0.20  0.18  0.16  0.03
X2        1.00  0.63  0.75  0.08  0.11  0.13  0.04  0.09
X3              1.00  0.84  0.02  0.12  0.07  0.15  0.05
X4                    1.00  0.01  0.11  0.06  0.02  0.13
X5                          1.00  0.93  0.02  0.05  0.03
X6                                1.00  0.11  0.09  0.02
X7                                      1.00  0.95  0.90
X8                                            1.00  0.93
X9                                                  1.00



An Intercorrelation Matrix (cont.)


Notice the patterns of intercorrelation

 Variables 1, 2, 3, & 4 correlate highly with each other, but not with the rest of the variables

 Variables 5 & 6 correlate highly with each other, but not with the rest of the variables

 Variables 7, 8, & 9 correlate highly with each other, but not with the rest of the variables


Deduction: The nine variables seem to be measuring 3 "things" or underlying factors.

Q What are these three factors?

Q To what extent does each variable measure each of these three factors?
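As a numerical check, the clustering is visible in the eigenvalues of the matrix itself. Below is a minimal Python sketch (numpy assumed) that eigendecomposes the nine-variable matrix above and counts the eigenvalues greater than 1.0, previewing the extraction criterion discussed later in the lecture.

import numpy as np

# The hypothetical 9 x 9 intercorrelation matrix from the slide above
R = np.array([
    [1.00, 0.80, 0.70, 0.95, 0.01, 0.20, 0.18, 0.16, 0.03],
    [0.80, 1.00, 0.63, 0.75, 0.08, 0.11, 0.13, 0.04, 0.09],
    [0.70, 0.63, 1.00, 0.84, 0.02, 0.12, 0.07, 0.15, 0.05],
    [0.95, 0.75, 0.84, 1.00, 0.01, 0.11, 0.06, 0.02, 0.13],
    [0.01, 0.08, 0.02, 0.01, 1.00, 0.93, 0.02, 0.05, 0.03],
    [0.20, 0.11, 0.12, 0.11, 0.93, 1.00, 0.11, 0.09, 0.02],
    [0.18, 0.13, 0.07, 0.06, 0.02, 0.11, 1.00, 0.95, 0.90],
    [0.16, 0.04, 0.15, 0.02, 0.05, 0.09, 0.95, 1.00, 0.93],
    [0.03, 0.09, 0.13, 0.13, 0.03, 0.02, 0.90, 0.93, 1.00],
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]        # sorted largest first
print("Eigenvalues:", np.round(eigenvalues, 3))
print("Eigenvalues > 1.0:", int(np.sum(eigenvalues > 1.0)))  # expect 3, matching the deduction above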


Research Questions That
Lend Themselves to Factor Analysis


A 20 item attitudinal survey of citizen attitudes
about the problems of crime and the administration of justice.

Q Does the survey measure 20 different independent attitudinal dimensions or do the survey items only measure a few underlying attitudes?


A pre-sentence investigation

Q Are the individual items in a pre-sentence investigation measuring as many independent background factors, or do they measure a few underlying background dimensions; e.g. social, educational, criminal, etc.?

The purpose of factor analysis is to reduce multiple variables to a lesser number of underlying factors that are being measured by the variables.

Applications of Factor Analysis


Exploratory factor analysis

A non-theoretical application. Given a set of variables, what are the underlying dimensions (factors), if any, that account for the patterns of collinearity among the variables?

Example: Given the multiple items of information gathered on applicants applying for admission to a police academy, how many independent factors are actually being measured by these items?


Confirmatory factor analysis

Given a theory with four concepts that purport to explain some behavior, do multiple measures of the behavior reduce to these four factors?

Example: Given a theory that attributes delinquency to four independent factors, do multiple measures on delinquents reduce to measuring these four factors?

Applications of Factor Analysis (cont.)


R and Q Factor Analysis

R factor analysis involves extracting latent factors from among the variables

Q factor analysis involves factoring the subjects vis-à-vis the variables. The result is a "clustering" of the subjects into independent groups based upon factors extracted from the data.

This application is not used much today, since a variety of clustering techniques have been developed specifically for grouping subjects into independent groups.


The Logic of Factor Analysis

Given an N by k database …


Subjects    X1   X2   X3   …    Xk
   1         2   12    0   …   113
   2         5   16    2   …   116
   3         7    8    1   …   214
   …         …    …    …   …     …
   N        12   23    0   …   168

Compute a k x k intercorrelation matrix …

      X1    X2    X3    …     Xk
X1  1.00  0.26  0.84    …   0.72
X2        1.00  0.54    …   0.63
X3              1.00    …   0.47
…                       …      …
Xk                          1.00

Reduce the intercorrelation matrix to a k x F matrix of factor loadings …
Variables   Factor I   Factor II   Factor III
X1            0.932      0.013       0.250
X2            0.851      0.426       0.211
X3            0.134      0.651       0.231
…               …          …           …
Xk            0.725      0.344       0.293
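Under the principal components method, that reduction has a closed form: the loading of a variable on a factor is the corresponding eigenvector element scaled by the square root of the factor's eigenvalue. A minimal sketch (Python with numpy assumed):

import numpy as np

def pc_loadings(R, n_factors):
    # Principal component loadings: eigenvectors of the correlation
    # matrix R, scaled by the square roots of their eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]   # largest first
    return eigvecs[:, order] * np.sqrt(eigvals[order])

Squaring and summing a column of the result recovers that factor's eigenvalue; squaring and summing a row recovers the variable's communality.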


What is a Factor Loading?


A factor loading is the correlation between a variable and a factor that has been extracted from the data.

Example: Note the factor loadings for variable X1.

Variables   Factor I   Factor II   Factor III
X1            0.932      0.013       0.250

Interpretation

Variable X1 is highly correlated with Factor I, but negligibly correlated with Factors II and III

Q How much of the variance in variable X1 is measured or accounted for by the three factors that were extracted?

Simply square the factor loadings and add them together

(0.932² + 0.013² + 0.250²) = 0.93129

This is called the communality of the variable.
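The arithmetic is easy to verify; a one-line check in Python (numpy assumed):

import numpy as np

loadings_x1 = np.array([0.932, 0.013, 0.250])  # X1's loadings on Factors I-III
communality = np.sum(loadings_x1 ** 2)         # square and sum the loadings
print(round(communality, 5))                   # 0.93129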

Steps in Factor Analysis


Step 1: Compute a k by k intercorrelation matrix and assess its factorability.


Step 2: Extract an initial solution.


Step 3: From the initial solution, determine the appropriate number of factors to be extracted in the final solution.


Step 4: If necessary, rotate the factors to clarify the factor pattern in order to better interpret the nature of the factors.


Step 5: Depending upon subsequent applications, compute a factor score for each subject on each factor.
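For readers who want to reproduce the five steps in software, below is a hedged end-to-end sketch using the third-party Python package factor_analyzer (an assumption on my part; the lecture's own output is from SPSS). df is assumed to be a pandas DataFrame of metric variables.

from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

# Step 1: factorability of the intercorrelation matrix
chi_square, p_value = calculate_bartlett_sphericity(df)
kmo_per_variable, kmo_overall = calculate_kmo(df)

# Step 2: initial solution with as many components as variables
fa = FactorAnalyzer(n_factors=df.shape[1], rotation=None, method="principal")
fa.fit(df)

# Step 3: retain factors with eigenvalues greater than 1.0
eigenvalues, _ = fa.get_eigenvalues()
n_keep = int((eigenvalues > 1.0).sum())

# Step 4: final solution, rotated to clarify the pattern
fa = FactorAnalyzer(n_factors=n_keep, rotation="varimax", method="principal")
fa.fit(df)
print(fa.loadings_)              # rotated factor loadings
print(fa.get_communalities())    # communality of each variable

# Step 5: a factor score for each subject on each factor
scores = fa.transform(df)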


An Eleven Variable Example


The variables and their code names

• Sentence (sentence)

• Number of prior convictions (pr_conv)

• Intelligence (iq)

• Drug dependency (dr_score)

• Chronological age (age)

• Age at 1st arrest (age_firs)

• Time to case disposition (tm_disp)

• Pre-trial jail time (jail_tm)

• Time served on sentence (tm_serv)

• Educational equivalency (educ_eqv)

• Level of work skill (skl_indx)

Intercorrelation Among the Variables




Q How much collinearity or common variance exists among the variables?


Q Is the intercorrelation matrix "factorable"?

Ways to Determine the Factorability of an Intercorrelation Matrix

Two Tests

Bartlett's Test of Sphericity

Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO)

Consider the intercorrelation matrix below, which is called an identity matrix.

      X1    X2    X3    X4    X5
X1  1.00  0.00  0.00  0.00  0.00
X2        1.00  0.00  0.00  0.00
X3              1.00  0.00  0.00
X4                    1.00  0.00
X5                          1.00

The variables are totally noncollinear. If this matrix were factor analyzed …

It would extract as many factors as variables, since each variable would be its own factor.

It is totally non-factorable

Bartlett's Test of Sphericity


In matrix algebra, the determinant of an identity matrix is equal to 1.0. For example …


I = | 1.0   0.0 |
    | 0.0   1.0 |

|I| = (1.0 × 1.0) - (0.0 × 0.0) = 1.0

Example: Given the intercorrelation matrix below, what is its determinant?

R = | 1.00   0.63 |
    | 0.63   1.00 |

|R| = (1.00 × 1.00) - (0.63 × 0.63) = 0.6031
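The same determinant can be checked in a line of Python (numpy assumed):

import numpy as np

R = np.array([[1.00, 0.63],
              [0.63, 1.00]])
print(np.linalg.det(R))   # approximately 0.6031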

Bartlett's Test of Sphericity (cont.)


Bartlett's test of sphericity

Calculates the determinant of the intercorrelation matrix (R), which is derived from the matrix of sums of products and cross-products (S).

The determinant of R is converted to a chi-square statistic and tested for significance.

The null hypothesis is that the intercorrelation matrix comes from a population in which the variables are noncollinear (i.e. an identity matrix)

And that the non-zero correlations in the sample matrix are due to sampling error.




Chi-square

χ² = -[(n - 1) - (2p + 5)/6] ln|R|

where n = sample size, p = number of variables, and |R| = the determinant of the intercorrelation matrix

df = p(p - 1)/2
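A short sketch of the test as written above (Python with numpy and scipy assumed; R is the p x p intercorrelation matrix and n the sample size):

import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    # Bartlett's test of sphericity: H0 is that R comes from a
    # population identity matrix (i.e. noncollinear variables).
    p = R.shape[0]
    statistic = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return statistic, df, chi2.sf(statistic, df)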

Results of Bartlett's Test of Sphericity





Test Results

2 = 496.536

df = 55

p  0.001


Statistical Decision

The sample intercorrelation matrix did not come from a population in which the intercorrelation matrix is an identity matrix.

Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO)


If two variables share a common factor with other variables, their partial correlation (aij), controlling for all the other variables, will be small, since little unique variance remains once the common factor is removed.

aij = rij·1,2,3,…,k  (the partial correlation between variables i and j, controlling for the other variables)

KMO = Σ r²ij / (Σ r²ij + Σ a²ij),  summed over all pairs of variables i ≠ j


If aij → 0.0

The variables are measuring a
common factor, and KMO → 1.0


If aij → 1.0

The variables are not measuring a
common factor, and KMO → 0.0
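In practice the partial correlations aij can be read off the inverse of the correlation matrix, which makes the overall KMO straightforward to compute; a minimal sketch (numpy assumed):

import numpy as np

def kmo(R):
    # Overall KMO: squared correlations versus squared partial
    # correlations, each summed over all pairs of distinct variables.
    inv = np.linalg.inv(R)
    # aij: partial correlation of i and j controlling for the others
    A = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    np.fill_diagonal(A, 0.0)
    R_off = R - np.eye(R.shape[0])     # drop the 1s on the diagonal
    r2 = np.sum(R_off ** 2)
    a2 = np.sum(A ** 2)
    return r2 / (r2 + a2)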


Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO) (cont.)


Interpretation of the KMO as characterized by Kaiser, Meyer, and Olkin …



KMO Value       Degree of Common Variance
0.90 to 1.00    Marvelous
0.80 to 0.89    Meritorious
0.70 to 0.79    Middling
0.60 to 0.69    Mediocre
0.50 to 0.59    Miserable
0.00 to 0.49    Don't Factor


Results of the KMO





The KMO = 0.698


Interpretation

The degree of common variance among the eleven variables is "mediocre" bordering on "middling"

If a factor analysis is conducted, the factors extracted will account for a fair amount of variance, but not a substantial amount.


Extracting an Initial Solution


A variety of methods have been developed to extract factors from an intercorrelation matrix. SPSS offers the following methods …

 Principal components method (probably the most commonly used method)

 Maximum likelihood method (a commonly used method)

 Principal axis method, also known as common factor analysis

 Unweighted least-squares method

 Generalized least squares method

 Alpha method

 Image factoring


An Initial Solution Using the Principal Components Method


In the initial solution, each variable is standardized to have a mean of 0.0 and a standard deviation of 1.0. Thus …

The variance of each variable = 1.0

And the total variance to be explained is 11,
i.e. 11 variables, each with a variance = 1.0

Since a single variable can account for 1.0 unit of variance …

A useful factor must account for more than 1.0 unit of variance, or have an eigenvalue > 1.0

Otherwise the factor extracted explains no more variance than a single variable.

Remember the goal of factor analysis is to explain multiple variables by a lesser number of factors.


The Results of the Initial Solution



11 factors (components) were extracted, the same as the number of variables factored.

Factor I

The 1st factor has an eigenvalue = 3.698. Since this is greater than 1.0, it explains more variance than a single variable, in fact 3.698 times as much.

The percent of variance explained …

(3.698 / 11 units of variance) (100) = 33.617%


The Results of the Initial Solution (cont.)

Factor II

The 2nd factor has an eigenvalue = 2.484. It is also greater than 1.0, and therefore explains more variance than a single variable

The percent of variance explained

(2.484 / 11 units of variance) (100) = 22.580%

Factor III

The 3rd factor has an eigenvalue = 1.237. Like Factors I & II it is greater than 1.0, and therefore explains more variance than a single variable.

The percent of variance explained

(1.237 / 11 units of variance) (100) = 11.242%

The remaining factors

Factors 4 through 11 have eigenvalues less than 1, and therefore explain less variance than a single variable.
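The percent-of-variance arithmetic is simply each eigenvalue divided by the 11 units of total variance; a quick Python sketch (third decimals may differ slightly from the slide output because the printed eigenvalues are rounded):

eigenvalues = [3.698, 2.484, 1.237]   # the three factors worth retaining
k = 11                                # total units of variance
for i, ev in enumerate(eigenvalues, start=1):
    print(f"Factor {i}: {ev / k * 100:.3f}% of variance")
print(f"Cumulative: {sum(eigenvalues) / k * 100:.3f}%")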

The Results of the Initial Solution (cont.)


Nota Bene

 The eigenvalues associated with the factors (components) sum to 11.

(3.698 + 2.484 + 1.237 + 0.952 + … + 2.437E-02) = 11


 The cumulative % of variance explained by the first three factors is 67.439%

In other words, 67.439% of the total variance in the 11 variables can be accounted for by the 3 factors.

This reflects the KMO of 0.698, a "mediocre" to "middling" degree of common variance.


 This initial solution suggests that the final solution should extract no more than 3 factors.

Cattell's Scree Plot

Another way to determine the number of factors to extract in the final solution is Cattell's scree plot. This is a plot of the eigenvalues associated with each of the factors extracted, against each factor.



At the point that the plot begins to level off, the additional factors explain less variance than a single variable.
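A matplotlib sketch of such a plot is below. Only the first four eigenvalues and the last one appear in this lecture's output, so the middle values are hypothetical placeholders chosen merely to sum to 11.

import matplotlib.pyplot as plt

eigenvalues = [3.698, 2.484, 1.237, 0.952,          # from the initial solution
               0.881, 0.618, 0.465, 0.364, 0.181,   # hypothetical placeholders
               0.096, 0.024]                         # last value = 2.437E-02

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")   # the eigenvalue-greater-than-1.0 cutoff
plt.xlabel("Component number")
plt.ylabel("Eigenvalue")
plt.title("Cattell's scree plot")
plt.show()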



Raymond B. Cattell (1952). Factor Analysis. New York: Harper & Bros.

Factor Loadings


The component matrix indicates the correlation of each variable with each factor.





The variable sentence

Correlates 0.933 with Factor I

Correlates 0.104 with Factor II

Correlates -0.190 with Factor III

Factor Loadings (cont.)

The total proportion of the variance in sentence explained by the three factors is simply the sum of its squared factor loadings.

(0.933² + 0.104² + (-0.190)²) = 0.917

This is called the communality of the variable sentence

The communalities of the 11 variables are as follows: (cf. column headed Extraction)




As is evident from the table, the proportion of variance in each variable accounted for by the three factors is not the same.

What Do the Three Factors Measure?

The key to determining what the factors measure is the factor loadings

For example

Which variables load (correlate) highest on Factor I and low on the other two factors?



Factor I

sentence (.933) tm_serv (.907)
age (.853) jail_tm (.659)
age_firs (-.581) pr_conv (.548)
dr_score (.404)


What Do the Three Factors Measure? (cont.)

Naming Factor I: What do these seven variables have in common, particularly the first few, which have the highest loadings on Factor I?

Degree of criminality? Career criminal history? You name it …

Factor II

educ_eqv (.935) skl_indx (.887)
iq (.808)

Naming Factor II: Educational level or job skill level?

Factor III

tm_disp (.896)

Naming Factor III: This factor is difficult to interpret, since only one variable loads highest on it.

Could this be a measure of criminal case processing dynamics? We would need more variables loading on this factor to know.

Summary of Results

• The 11 variables were reduced to 3 factors

• These three factors account for 67.44% of the variance in the variables

• Factor I appears to measure criminal history

• Factor II appears to measure educational/skill level

• Factor III is ambiguous at best

• Three of the variables that load highest on Factor I (age_firs, pr_conv, and dr_score) nonetheless have relatively weak loadings. Compare their loadings and communalities below.


            Loading on Factor I   Communality
age_firs          -.581              .483
pr_conv            .548              .303
dr_score           .404              .252

The communality is the proportion of the variance in a variable accounted for by the three factors.

What Can Be Done if the Factor Pattern
Is Not Clear?


Sometimes one or more variables may load about the same on more than one factor, making the interpretation of the factors ambiguous.

Ideally, the analyst would like to find that each variable loads high (≈ 1.0) on one factor and approximately zero on all the others (≈ 0.0).


An Ideal Component Matrix

Variables              Factors (f)
             I      II     …      f
X1          1.0    0.0     …    0.0
X2          1.0    0.0     …    0.0
X3          0.0    0.0     …    1.0
X4          0.0    1.0     …    0.0
…            …      …      …      …
Xk          0.0    0.0     …    1.0

The values in the matrix are factor loadings, the correlations between each variable and each factor.

Factor Rotation


Sometimes the factor pattern can be clarified by "rotating" the factors in F-dimensional space. Consider the following hypothetical two-factor solution involving eight variables.

[Plot: the eight variables plotted against the unrotated axes, Factor I (vertical) and Factor II (horizontal). X1 & X2 lie near the F I axis; X3, X5, & X6 lie near the F II axis; X4, X7, & X8 fall between the axes.]


Variables 1 & 2 load on Factor I, while variables 3, 5, & 6 load on Factor II. Variables 4, 7, & 8 load about the same on both factors.

What happens if the axes are rotated?


Factor Rotation (cont.)

Orthogonal rotation: The axes remain 90° apart


[Plot: the same eight variables shown against the rotated axes. After rotation, X1, X2, X5, X7, & X8 lie near the rotated F I axis, while X3, X4, & X6 lie near the rotated F II axis.]



Variables 1, 2, 5, 7, & 8 load on Factor I, while variables 3, 4, & 6 load on Factor II

Notice that, relative to variables 4, 7, & 8, the rotated factor pattern is clearer than the unrotated pattern.

What Criterion is Used in Factor Rotation?

There are various methods that can be used in factor rotation …

Varimax Rotation

Attempts to achieve loadings of ones and zeros in the columns of the component matrix (1.0 & 0.0).

Quartimax Rotation

Attempts to achieve loadings of ones and zeros in the rows of the component matrix (1.0 & 0.0).

Equimax Rotation

Combines the objectives of both varimax and quartimax rotations

Orthogonal Rotation

Preserves the independence of the factors; geometrically, the axes remain 90° apart.

Oblique Rotation

Produces factors that are not independent; geometrically, the axes are not 90° apart.
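To make the varimax criterion concrete, here is a compact sketch of Kaiser's iterative algorithm (Python with numpy assumed; Phi is the unrotated k x F loading matrix). Setting gamma = 0.0 instead of 1.0 yields a quartimax-style criterion.

import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    # Iteratively find the orthogonal rotation matrix R that maximizes
    # the varimax criterion, then return the rotated loadings Phi @ R.
    k, F = Phi.shape
    R = np.eye(F)
    d_old = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (L ** 3 - (gamma / k) * L @ np.diag(np.sum(L ** 2, axis=0))))
        R = u @ vt
        d_new = np.sum(s)
        if d_old != 0.0 and d_new / d_old < 1.0 + tol:
            break                      # converged
        d_old = d_new
    return Phi @ R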

Rotation of the Eleven Variable Case Study

Method: Varimax rotation. Below are the unrotated and rotated component matrices.






Did the Rotation Improve the
Factor Pattern?


Variable    Factor           Effect on the Loading
sentence    I: no change     Increased (.933 to .956)
pr_conv     I: no change     Decreased (.548 to .539)
iq          II: no change    Increased (.808 to .823)
dr_score    I: no change     Increased (.404 to .431)
tm_disp     III: no change   Slight decrease (.896 to .892)
jail_tm     I: no change     Decreased (.659 to .579)
tm_serv     I: no change     Increased (.907 to .945)
educ_eqv    II: no change    Increased (.935 to .942)
skl_indx    II: no change    Increased (.887 to .891)
age         I: no change     Decreased (.853 to .844)
age_firs    I: no change     Decreased (-.581 to -.536)

Interpretation: Not much change. The rotated pattern is not a substantial improvement over the unrotated pattern.

How Good a Fit is The Three
Factor Solution?


Measures of goodness-of-fit

KMO (0.698): mediocre to middling

Percent of variance accounted for: 67.44%, the same for both the unrotated and rotated solutions

Communalities (the proportion of the variability in each variable accounted for by the three factors)

These range from a high of 0.917 (sentence) to a low of 0.252 (dr_score)

Factor pattern: Fairly clear for Factors I and II, ambiguous for Factor III

Reproduced correlation matrix: One measure of goodness-of-fit is whether the factor solution can reproduce the original intercorrelation matrix among the eleven variables.

Reproduced Intercorrelation Matrix





The upper half of the table presents the reproduced bivariate correlations. Compare these with the lower half of the table, which presents the residuals.

Residual = (observed - reproduced correlation)

The diagonal elements in the upper half of the table are the communalities associated with each variable.

Over half of the residuals (52%) are greater than 0.05.

Computing Factor Scores


A useful byproduct of factor analysis is factor scores. Factor scores are composite measures that can be computed for each subject on each factor.

They are standardized measures with a mean = 0.0 and a standard deviation of 1.0, computed from the factor score coefficient matrix.
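A minimal sketch of the computation (numpy assumed): standardize each variable, then post-multiply by the k x F factor score coefficient matrix B.

import numpy as np

def factor_scores(X, B):
    # X: N x k raw data matrix; B: k x F factor score coefficient matrix.
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize columns
    return Z @ B                                      # N x F factor scores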

Application

Suppose the eleven variables in this case study were to be used in a multiple regression equation to predict the seriousness of the offense committed by parole violators.

Clearly, the eleven predictor variables are collinear, a problem for interpreting the extent to which each variable affects the outcome.

Instead, compute a factor score for each subject on each factor, and use the factor scores as the predictor variables in the multiple regression analysis. Recall that the factors are noncollinear.
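As a sketch of that workflow (scikit-learn assumed), where scores is the N x 3 matrix of factor scores and ser_indx is the hypothetical seriousness-of-offense outcome:

from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(scores, ser_indx)
print(model.intercept_, model.coef_)   # compare with the equation reported below
print(model.score(scores, ser_indx))   # R-squared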

Regression Analysis of Crime Seriousness With the Eleven Variables






Regression Analysis of Crime Seriousness With the Eleven Variables (cont.)






Interpretation

R² = 0.696

Significant predictor variables: sentence on the previous offense, prior convictions, and previous jail time

The other predictors were not entered into the equation, not because they are unrelated to crime seriousness, but because they are collinear with the variables in the equation.

Regression Analysis with Factor Scores






Interpretation of the Regression Analysis With Factor Scores

R² = 0.643 (previous regression model = 0.696)

ser_indx = 3.829 + 1.587 (F I) + 0.303 (F III)


F = Factor

Factors I and III were found to be significant predictors. Factor II was not significant.

Factor I is assumed to be a latent factor that measures criminal history, while Factor II appears to measure educational/skill level.

Since only one variable (tm_disp) loaded on Factor III, it is difficult to theorize about the nature of the latent variable it measures other than it measures time to disposition.

Comparison of the beta weights indicates that the criminal history factor (Factor I) has a greater effect in explaining the seriousness of the offense than does Factor III (tm_disp).
