Assessment System and Unit Evaluation
2a. Assessment System
Evaluating and refining the assessment system. Since the 2002 NCATE Accreditation Visit, the Unit assessment system has undergone changes, most notably the adoption of PASS-PORT, the electronic system for assessing candidates’ accomplishments. The series of review points begins with a request to the Assessment Committee. This group, composed of Unit faculty, the Director of Assessment, and community partners, reviews the request and prepares the proposal for submission to
the NCATE Steering Committee, which reviews the document before
sending it to the Dean’s Administrative Council (DAC). The DAC
disseminates the proposal, with a request for feedback, to three
groups: the Council for Teacher Education (CTE), faculty members,
and the professional community (e.g., middle school teachers).
Written comments are forwarded to the Assessment Committee, which
finalizes the document by revising the proposal or including an
addendum, and resubmits it to the DAC for final approval. At each
point, reviewers decide if the proposal should move forward or if
clarification or additional information is needed.
The Unit assessment system (Exhibit
2a.1 and
Exhibit 2a.2) is an ongoing and recursive process
for improving the Unit programs (Exhibit
2a.3). Any proposed change must progress through
the Unit assessment system for approval (Exhibit
2a.3). To
further refine the assessment process, revisions can be initiated
at any step in the process. The Assessment Committee meets regularly to review data reports and suggestions from other committees. At these meetings, the committee makes refinements to the assessment system, as documented in the meeting minutes
(Exhibit
2c.2,
Exhibit 2c.3,
Exhibit 2c.4,
Exhibit 2c.5,
Exhibit 2c.6,
Exhibit 2c.7,
Exhibit 2c.8). The results of the review and any resulting changes are disseminated according to system procedures.
Individual faculty, school partners, and program committees (e.g., literacy committee) may propose a change, for example by identifying a need to improve an assessment instrument or procedure and informing the appropriate committee of their recommendation. Rubrics (e.g., classroom
management assessment) approved through the assessment system
provide data on candidate performance. When the need for a change
is identified, a recommendation is submitted to the Director of
Assessment and Program Evaluation (hereafter the Director of
Assessment) who forwards it to the appropriate committee for
review and evaluation. While some artifacts and rubrics are required for PASS-PORT, faculty can also create their own course assessments in PASS-PORT. The validity and reliability of all Unit assessment tools are reviewed and improved on an ongoing basis. The Unit assessment system provides regular and comprehensive data on candidate performance, Unit operations, and program quality. The Director of Assessment and the Assessment Committee communicate with the Director of the Office of Institutional Research and Assessment (IRA) to manage activities and initiate changes (e.g., adding an item to a survey instrument) for the Student Opinions of Teaching (SOTs)
and the Exit and Employer surveys. All other assessments are
administered through the process developed by the committee and
managed by the Director of Assessment.
Collection of information
on candidate proficiencies. The Unit assessment system
includes criteria to assess each candidate’s strengths and areas
for improvement. Considered collectively, results of the
assessment criteria indicate program strengths and areas for
improvement. The assessment system provides the means for the collection of data on initial and advanced candidates as they progress through a series of portals (Table 5). The Unit ensures
that the assessment system collects information on candidate
proficiencies by aligning program curricula, course requirements,
and assessments with the Conceptual Framework (CF), state (LDE)
standards, and professional (SPA) standards. Data are collected
on candidates’ knowledge of the CF, including technology and
diversity; national standards (INTASC, NBPTS), state standards (LCET),
program standards, and Candidate Dispositions. Candidate
knowledge, skills, and dispositions are assessed in each course
and across program levels. Candidates enrolled in initial certification programs and advanced programs in all departments move through a series of portals to track their progress and determine whether requirements at each portal are met. Objectives for all professional courses are aligned with the CF; a description of the CF is included on all syllabi, and notations for the relevant CF elements accompany each objective. Also, artifacts and assessments included in candidate portfolios at three distinct levels must reflect relevant standards.
Administration and
monitoring of key assessments. In addition to Unit
assessments, each initial and advanced program includes a unique
set of program and course assessments. Candidates enrolled in
initial certification programs demonstrate progress through
Portals 1-5, and candidates in advanced programs demonstrate
progress through Portals 6-10 (Major transition points for
candidate performance, Table 5:
Exhibit 2a.2). In initial and advanced programs,
candidates progress through portals (Exhibit
2a.3) by meeting a set of criteria, including Unit
assessments that are standardized across levels, programs, and
areas of concentration. Variability is introduced at the program
level through unique portfolio requirements for advanced
programs. PASS-PORT provides the means for the collection,
aggregation and disaggregation, and analysis of both individual
and group data. These data provide information to the Unit head
and program coordinators for forming, redesigning, or terminating
programs.
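For illustration only, the sketch below shows one way scores exported from an assessment system could be aggregated at the program level and then disaggregated by user group for comparison. The column names, values, and data layout are hypothetical assumptions and do not represent the actual PASS-PORT export format.

```python
# Illustrative sketch only: hypothetical score export.
# The columns (program, portal, group, score) are assumptions,
# not the actual PASS-PORT schema.
import pandas as pd

scores = pd.DataFrame({
    "program": ["Elementary", "Elementary", "Middle School", "Middle School"],
    "portal":  [3, 4, 3, 4],
    "group":   ["initial", "initial", "initial", "initial"],
    "score":   [2, 3, 3, 4],   # 1 = Unacceptable ... 4 = Exceeds Expectations
})

# Aggregate: mean rubric score per program and portal.
aggregated = scores.groupby(["program", "portal"])["score"].mean()

# Disaggregate: break the same data out by user group for comparison.
disaggregated = scores.groupby(["group", "program", "portal"])["score"].mean()

print(aggregated)
print(disaggregated)
```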
Most initial programs are
housed in T&L. The secondary programs and PK-12 programs in
kinesiology, music, and art are housed in partner colleges
identified in the overview. Areas of concentration within programs
are Early Childhood (PK-3), Elementary (Grades 1-5), Middle School
(Grades 4-8), Secondary Education & K-12, Special Education, and
Master of Arts in Teaching (MAT). Table 5:
Exhibit 2a.2
provides a detailed visual of transition points for initial and
advanced programs.
Fairness, accuracy,
consistency, and unbiased assessment procedures. The issues
of fairness, accuracy, consistency, and bias are addressed by
examining validity, reliability, scoring, and bias analysis of the
assessment instruments.
Validity and scoring. Validity was addressed in the development of the assessment system, as well as in the development of criteria for assessing artifacts for each program. The framework for the assessment system (Table 5)
was revised in 2003-2004 by the assessment committee to reflect an
operationalization of the CF; that is, the artifacts identified
for each portal represent ways by which candidates can demonstrate
proficiency in the CF elements. Within this framework, the Unit
and its programs include artifacts unique to the academic areas.
For example, the capstone artifact for the advanced programs in
the T&L and ELT is an action-research project, with content and
procedures unique to each program. Using this framework ensures
that all artifacts, regardless of the program for which they were
developed, reflect the CF.
Candidates’ artifacts are scored using a rubric developed by program committees composed of faculty specifically responsible for the material being assessed (e.g., faculty teaching a specific class from which an artifact is submitted) or faculty with expertise in a specific area in collaboration with partners. The criteria for each rubric are consistent with the requirements of the course(s) leading to submission of the artifact and with the professional standards related to the CF and the artifact (e.g., LCET or ELCC). A common scoring
scale is used for all artifacts. Four points define the scale:
unacceptable, approaching expectations, meets expectations, and
exceeds expectations. Specific descriptions for each criterion at
each level of the scoring scale are provided (e.g., management
plan and lesson plan rubrics) in the PASS-PORT handbook, which is
accessible online. Each faculty member assessing an artifact has
attended PASS-PORT workshops and is familiar with the criteria,
scoring scale, and descriptions of work for that artifact.
Performance standards were
established to reflect acceptable levels of proficiency at each
portal and are based on the experiences of candidates working
within that portal/transition point. Different performance
standards exist for the same artifact at different portals; that
is, the acceptable performance standard for a candidate’s lesson
plan submitted at Portal 3 is “Approaching Expectations,” but the
acceptable performance standard at Portal 4 is “Meets
Expectations,” because the standard is higher at Portal 4 than
Portal 3. Candidates must meet the performance standards for any
portal before proceeding further. Artifacts and the scoring
criteria for them are reviewed by individual faculty each
semester. Suggestions for changes can be submitted to program
chairs for discussion at monthly department meetings before being
submitted to the assessment committee in the formal Unit
assessment process.
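The sketch below illustrates this gating rule using the Lesson Plan thresholds named above. It is a minimal illustration under stated assumptions: the function, data structure, and threshold table are hypothetical, and actual portal requirements are defined by the Unit and its programs.

```python
# Illustrative sketch of the portal gating rule described above.
# Threshold values follow the Lesson Plan example in the text; actual
# requirements are defined by the Unit and its programs.
from enum import IntEnum

class RubricScore(IntEnum):
    UNACCEPTABLE = 1
    APPROACHING_EXPECTATIONS = 2
    MEETS_EXPECTATIONS = 3
    EXCEEDS_EXPECTATIONS = 4

# Hypothetical minimum lesson-plan score required at each portal.
LESSON_PLAN_STANDARD = {
    3: RubricScore.APPROACHING_EXPECTATIONS,
    4: RubricScore.MEETS_EXPECTATIONS,
}

def may_proceed(portal: int, score: RubricScore) -> bool:
    """Return True if the candidate's score meets the portal's standard."""
    return score >= LESSON_PLAN_STANDARD[portal]

# A score of "Approaching Expectations" clears Portal 3 but not Portal 4.
assert may_proceed(3, RubricScore.APPROACHING_EXPECTATIONS)
assert not may_proceed(4, RubricScore.APPROACHING_EXPECTATIONS)
```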
Each required artifact must
meet expectations on the respective rubric before a candidate can
be allowed to move to the next portal. Each assessment element
(i.e., the assessment of a candidate’s portfolio at a specific
portal) is scored on the same four point scale as individual
artifacts. The overall passing score is based on the following criteria: 1) the presence of all required artifacts at an acceptable level of performance, 2) an acceptable reflection relating each artifact to the appropriate professional standards and the CF, and 3) an understanding of program standards. The descriptions of the assessment system and artifacts provide evidence of content validity, which helps ensure that decisions or inferences based on the results of any assessment are appropriate, meaningful, and useful.
Reliability for scoring
artifacts is addressed through a systematic analysis of the
inter-rater reliability of the rubrics associated with artifacts.
This analysis of data examining the Lesson Plan and Classroom
Management Plan, both of which are artifacts at Portal 4 of the
Initial Teacher Education Program, began in Fall 2007, and is
currently in progress. An analysis for bias is being carried out
in conjunction with the test of reliability by examining the
results for the Lesson Plan and Classroom Management Plan across
gender and race.
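To give a concrete sense of what such an inter-rater reliability check and bias screen can look like, the sketch below computes a weighted Cohen’s kappa for two raters’ rubric scores and compares mean scores across demographic groups. It is an illustration only; the data, rater labels, and choice of statistic are assumptions and do not represent the Unit’s actual analysis.

```python
# Illustrative sketch only: the data and the choice of weighted Cohen's
# kappa are assumptions, not the Unit's actual reliability/bias analysis.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric scores (1-4) assigned by two raters to the same
# Lesson Plan artifacts, with candidate demographics for the bias screen.
data = pd.DataFrame({
    "rater_1": [3, 2, 4, 3, 2, 3, 4, 1],
    "rater_2": [3, 3, 4, 3, 2, 2, 4, 2],
    "gender":  ["F", "F", "M", "F", "M", "F", "M", "M"],
    "race":    ["W", "B", "W", "B", "W", "B", "W", "B"],
})

# Weighted kappa treats near-misses (e.g., 3 vs. 4) as partial agreement.
kappa = cohen_kappa_score(data["rater_1"], data["rater_2"], weights="quadratic")
print(f"Inter-rater agreement (quadratic-weighted kappa): {kappa:.2f}")

# Simple bias screen: compare mean scores across gender and race groups.
for column in ("gender", "race"):
    print(data.groupby(column)[["rater_1", "rater_2"]].mean())
```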
Assessments used for
operations and programs management and improvement. All
assessments are used to measure, manage, and improve the
operations and programs at four levels: 1) the Unit, 2) the
program, 3) the course, and 4) the candidate. Results of a few assessments and evaluations provide fair, accurate, and consistent indicators of the Unit’s overall effectiveness.
The IRA conducts an exit survey of graduating seniors that
includes general items, as well as a set of items specific to each
program. The Office of Student Teaching also surveys candidates
at the end of their student teaching semester or internship year
(Exhibit 2a.5: Basic Program Follow-up). Because candidates are
also surveyed at the completion of their program, both surveys
provide evidence of program elements that have been effective and
those for which improvements are indicated. These assessments are completed through electronic systems (the IRA survey through PeopleSoft and the student teaching survey through PASS-PORT) that allow the results to be aggregated into longitudinal reports that track candidate progress. Results of
IRA's Current Student Survey administered to a random sample can
also be used each semester to measure program improvement and
institutional effectiveness. The Professional Attributes and
Characteristics Scale, a PASS-PORT assessment, is administered at
multiple portfolio portals, thereby providing individual, user
group, and program data to track candidate progress throughout the
initial or advanced programs. (Samples are located in the Exhibit
Room.)
The IRA collects and
aggregates data including SOTs, recruitment, retention rates, and
exit surveys. A survey of PK-12 consumers’ (e.g., parents,
teachers, students, school/district administrators) perceptions of
graduates’ skills is administered to population samples. The Unit
collects data on programs, reflections, and follow-up surveys.
The IRA and Unit collaborate to develop and improve instruments to
appraise the value of candidates’ activities.
2b. Data Collection,
Analysis, and Evaluation
Timeline for collecting
key assessment data. The IRA collects data each semester, but
data are aggregated annually for reports. Through PASS-PORT, data
on candidate performance are collected and aggregated at the end
of each semester. Data from any prior semester are available to
be aggregated or disaggregated whenever needed to generate reports
for review. PASS-PORT artifacts submitted by candidates are
assessed by faculty each semester. When artifacts for each portal
are completed and uploaded, candidates’ portfolios are assessed at
the end of each semester by the faculty advisor to whom they are assigned. Both formative (e.g., artifact) and summative (e.g.,
portal) assessments are included. Representative samples of both
types of assessments for ELT and T&L are provided
(Exhibit
2b.5,
Exhibit 2b.6,
Exhibit 2b.7,
Exhibit 2b.8,
Exhibit 2b.9,
Exhibit 2b.10,
Exhibit 2b.11,
Exhibit 2b.12,
Exhibit 2b.13,
Exhibit 2b.14,
Exhibit 2b.15, and
Exhibit 2b.16). Questionnaires are sent at the beginning or end of each semester, depending on whether they are pre- or post-assessments.
Process and timeline used
to collect, summarize, and analyze data. The formal process
for collecting, summarizing, and analyzing data for operations and
program improvement is detailed in the Use of Data for Program
Improvement flowchart (Exhibit
2b.1) and in Figure 1: Timeline of Program
Improvements Based on Assessments in Exhibit 2a.1, the COEHD
Assessment System Description. Minimally, data are collected and
summarized once each semester. However, these procedures and
analyses can be undertaken at any time. Whenever a committee
(e.g., differentiated instruction committee) identifies a problem
based on a review of data in an assessment report, it develops a
proposal describing a change or refinement to be made. At this
point, the proposal is submitted to the Assessment Committee to
initiate the process detailed in
Exhibit 2a.4, the COEHD Assessment System Chart. In addition to this formal system, informal procedures for
assessment are ongoing. When a faculty member or committee
identifies a question or area of concern, discussions ensue on an
informal basis until the point when a proposal is introduced into
the formal system.
Procedures for collecting
data. The parties responsible for collecting data include Dr.
Michelle Hall, Office of Institutional Research and Assessment
(IRA) Director, who is responsible for administering institutional
assessments; Flo Winstead, Director of Assessment; and the Major
Field Assessment (MFA) coordinators at the program level. These
participants collect and aggregate data through PASS-PORT and
PeopleSoft for reports.
Data are summarized and
analyzed in various formats. Through PASS-PORT, data for one
candidate, a user group, a program, or the Unit can be summarized
in a report or chart. The IRA reports data using PeopleSoft. The
Unit completes the MFA that is submitted to the IRA annually.
Exhibit 2b.18 includes the webpage listing the Unit
MFA reports and a sample of the M.Ed. Special Education January
2008 Results and Use Report.
Data are summarized and
analyzed annually and biennially by the IRA. Additionally, data
derived from PASS-PORT assessments are available each semester.
Data can also be aggregated for reports whenever a request is
submitted to the Director of Assessment.
Information technologies used
to maintain the Unit assessment system are PeopleSoft and
PASS-PORT. IRA collects and aggregates data used to document
progress and achievements and detect Unit and program areas for
improvement. IRA directs the collection of SOTs, faculty and
student surveys, and demographic data. All data are collected
electronically and available to administrators and faculty for
review. IRA surveys and reports are available at
http://www.selu.edu/admin/ir/index.html. The Unit employs
PASS-PORT to collect and aggregate data on candidate performance
and faculty qualifications. Candidates in initial programs have used PASS-PORT as the vehicle for their portfolios since Fall 2004, and candidates in advanced programs since Fall 2006.
Records of formal candidate complaints and resolutions. The University’s
student handbook outlines the process by which candidates can
lodge a formal complaint. Initially, a formal complaint by a
candidate is communicated in writing to the faculty member. If
there is no resolution, the candidate submits the written
complaint to the faculty member’s department head. When the
department head and candidate are not able to achieve closure, the
COEHD or partner college Dean is presented with the candidate’s
written complaint and support documentation. Procedures for
specific complaints (i.e., sexual, racial, and gender harassment;
disability discrimination; and academic issues) are also outlined
in the Student Handbook. Charts showing the types and number of complaints are found in
Exhibit 2b.2 and
Exhibit 2b.3.
At the conclusion of each
academic year, complaints and resolutions are collected and
compiled by the Director of Assessment. The Director sends a
chart noting the number and types of complaints, with a summary
statement on the predominant complaints, to the COEHD Dean, who
then forwards the information to the DAC for discussion. Actions
are taken as deemed appropriate and necessary. In addition to the
formal complaints and resolutions procedure, candidates can voice
concerns or lodge complaints anonymously on the SOTs each
semester. Data are collected by the IRA, and a report on each
faculty member is forwarded to the faculty member and the
respective department heads. The faculty member and department
head review the documents and use the information for each faculty
member’s end-of-year performance assessment. Records of
complaints and outcomes are kept on file in the Dean’s office.
2c. Use of Data for
Program Improvement
In regard to candidate
performance on the main campus, at off-campus sites, and in
distance learning courses, assessment data indicate no significant
differences across programs, courses, and sections. The IRA and
the Director of Assessment can provide disaggregated data for
candidate performance in courses offered on the main campus, at
off-campus sites, and in distance learning courses. One or more
sections of some courses are offered off-campus or 100% online
(distance learning), but no undergraduate initial certification or
advanced programs offer all courses online or off-campus. The
Master of Arts in Teaching (MAT) initial certification program,
which has been redesigned, began accepting students in Summer
2007. No courses will be 100% online. In the MAT program that
is being phased out, candidates could complete some or all of
their courses online if they met certain criteria (e.g., employed
as a full-time teacher in the certification area). The ESL
concentration provides all ESL courses online.
Policies and procedures for
data collection have been established (Exhibit
2c.1), and candidates and faculty use data to
improve their performance. Candidates’ work in courses and FXs is
assessed throughout their programs. To progress through their
program, candidates must satisfy criteria for one portal of the
assessment system before moving to the next. Additionally,
artifact and portfolio assessments provide candidates with
feedback designed to help them become more effective
professionals. Assessments created or adapted by program faculty
are used to identify needed improvements. The Prospective
Education Candidate surveys, completed by initial (PEC) and
advanced (Advanced PEC) candidates, reflect the CF. Candidates’
performance on PRAXIS tests is also reviewed to identify areas for
improvement. When a deficit is identified in a candidate’s
performance, individual candidates can avail themselves of
services through the
Teacher Development Program to help them improve their
performance.
Data are also used to help
faculty improve their performance. The IRA collects and
aggregates SOT data, which are used to facilitate the improvement
of faculty performance. Faculty also develop an annual
professional development plan to guide their work and identify
achievements for the end-of-year assessment for review with the
head of their respective departments. Data are used to initiate
program and Unit changes on a regular basis.
Multiple data sources are
used to improve Unit programs. Based on results of evaluations,
program changes are initiated. By reviewing IRA and Unit reports,
committees and individual administrators and faculty identify
effective and ineffective aspects of the Unit and its programs
from formative and summative evaluation reports. Therefore, from
the outset, decisions to maintain or revise programs or procedures
are data-driven. Suggested changes are formalized and linked to
the Unit’s CF, SPA, and state standards, and follow procedures
outlined in the Unit assessment system. The system includes
assessment of specific tasks and requirements. Data collection
and analysis provide the information needed to make sound
decisions about curriculum, instruction, and candidate
performance. There is evidence that candidates entering the program will be prepared with the knowledge, skills, and dispositions necessary to perform effectively in their profession and, ultimately, to positively impact student learning.
Changes based on data derived
from assessments have occurred over the past three years and are
initiated as indicated by Unit assessment system evaluations.
Changes are then systematically linked to the CF and studied to
measure effects on teacher education and other professional
programs. Also, since the last NCATE review, the initial
certification, alternative certification, and advanced programs
were redesigned to meet state guidelines and to reflect needs
identified by the Unit. Data derived from PASS-PORT and the IRA
were reviewed to identify areas for improvement in programs and
courses. Specific examples of data-driven change are contained in
SPA reports and in Exhibit 2c.4: Data-driven changes over the past
three years.
For annual reviews of faculty
performance, the department head and faculty member ascertain
areas for improvement. Faculty then develop ways to improve their
performance in the area identified. One change has been the
addition of more courses offered online. Following the redesign of
the undergraduate initial certification program, for example,
sections of EDUC 201 and 211 have been offered online to meet
candidate needs. Modifications in both initial and advanced
programs focus on diversity. On student teaching exit surveys,
candidates stated that they wanted to be better prepared to meet
the needs of diverse populations in inclusive settings. Several
examples of data-driven changes regarding diversity are contained
in
Exhibit 2c.9.
Assessment data are shared
with candidates, faculty, and other stakeholders in various ways.
Reports are shared with faculty at regular COEHD meetings (e.g.,
Dean’s Advisory Council), departmental monthly meetings, and
committee meetings. Each semester, data are disseminated when the
Unit convenes to recognize candidate and faculty achievements and
to hear updates pertaining to the Unit and programs. At the
beginning of each academic year, administrators, faculty, and
staff meet at a convocation, at which time the University
president gives the state of the University presentation and
faculty and staff are recognized for special achievements (Exhibit
2c.12). Candidates are apprised of assessment data at
meetings of student education organizations, and candidate
representatives may serve on Unit committees. The public is informed about education news through the local newspaper and the University radio and television stations.
The IRA Director works with
administrators and faculty to create assessment tools used to
conduct surveys; collect, aggregate, and analyze data; and generate reports, thereby providing outcomes measured with valid and reliable instruments. The data are
used to create reports that are disseminated to stakeholders and
posted on the University
website for other interested parties. If other data are
needed, individuals can submit a request to the IRA Director.
Another noteworthy accomplishment of the Unit is a research
project on the inter-rater reliability of the rubrics associated
with assessment of PASS-PORT artifacts. This analysis of data
examining the Lesson Plan and Classroom Management Plan, both of
which are artifacts at Portal 4 of the Initial Teacher Education
Program, began in fall 2007.
The Unit is particularly
attentive to continuous improvement resulting from the review and
analysis of data. Faculty and administrators regularly critique
assessment instruments and review assessment results to identify
program elements that are effective and those for which
improvements are needed, primarily to better meet the needs of
candidates. Faculty are required to participate in monthly COEHD faculty or program meetings, and groups of faculty also choose to form formal or informal ad hoc committees to discuss a specific topic, work on a project, or plan actions to address a problem.
Data are reported at regular meetings, and most groups in which
faculty choose to participate originate because of interest in
assessment findings.