Electronic Theses and Dissertations

Identifier

681

Date

2012

Document Type

Dissertation

Degree Name

Doctor of Philosophy

Major

Psychology

Concentration

School Psychology

Committee Chair

Randy Floyd

Committee Member

Charles Blaha

Committee Member

Beth Meisinger

Committee Member

Gilbert Parra

Abstract

Emphasis on regular mathematics skill assessment, intervention, and progress monitoring under the RTI model has created a need for the development of assessment instruments that are psychometrically sound, reliable, universal, and brief. Important factors to consider when developing or selecting assessments for the school environment include which skills are assessed; mathematics curricula typically treat computation and applications as separate skills taught in sequence. It is also important to consider what additional factors may influence performance on such tests due to the nature of test administration and the characteristics of the test items. The current study investigated the construct validity of established, widely used curriculum-based measurement (CBM) tests and standardized, norm-referenced tests of mathematics, as well as the potential confounding influence of processing speed and reading abilities. Construct validity was assessed through an investigation of convergent and discriminant validity using confirmatory factor analysis (CFA). Numerous prespecified, theoretical models were tested to replicate previous studies suggesting specific models of mathematics ability (convergent validity) and to identify construct-irrelevant variance (discriminant validity) imposed on tests of computation and applications by processing speed and reading. The current study extended previous work in the area of mathematics by providing additional evidence for a two-factor structure of mathematics, with Computation and Applications as distinct yet related constructs, and by investigating the relations between the mathematics constructs and processing speed and reading. Results indicated that all constructs were significantly correlated with one another, and that the mathematics constructs were more highly correlated with each other than with unrelated constructs, with the exception of Applications and Reading. Four a priori models of mathematics, ranging from a single factor to four factors, were tested using CFA. Results indicated that a four-factor model including Computation, Applications, Processing Speed, and Reading as factors was the best-fitting model. The four-factor model was extended to test the construct-irrelevant variance imposed by Processing Speed on fluency-based tests, as well as the variance imposed by Reading on applications tests. Results indicated that, in all but one case, Processing Speed did not contribute significant influence to fluency-based tests, nor did Reading to applications tests.
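
For illustration only, the following is a minimal sketch of how a correlated four-factor CFA of the kind described in the abstract could be specified in Python with the semopy package. The indicator names (comp1-comp3, app1-app3, ps1-ps2, read1-read2) and the file data.csv are hypothetical placeholders, not the measures or software used in the dissertation.

import pandas as pd
import semopy

# Hypothetical four-factor measurement model in lavaan-style syntax:
# each latent factor is defined by assumed observed indicators, and
# all latent factors are allowed to covary (oblique model).
MODEL_DESC = """
Computation =~ comp1 + comp2 + comp3
Applications =~ app1 + app2 + app3
ProcessingSpeed =~ ps1 + ps2
Reading =~ read1 + read2
Computation ~~ Applications
Computation ~~ ProcessingSpeed
Computation ~~ Reading
Applications ~~ ProcessingSpeed
Applications ~~ Reading
ProcessingSpeed ~~ Reading
"""

# Placeholder dataset of observed test scores, one column per indicator.
data = pd.read_csv("data.csv")

# Estimate the model and report parameter estimates and fit statistics,
# which could be compared across competing one- to four-factor models.
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())
print(semopy.calc_stats(model))

Competing models (e.g., a single-factor model or a two-factor Computation/Applications model) would be specified the same way and compared on fit indices such as CFI and RMSEA.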

Comments

Data is provided by the student.

Library Comment

Dissertation or thesis originally submitted to the local University of Memphis Electronic Theses and Dissertations (ETD) Repository.
