Industry Modeled Assessments for Authentic Learning

Image captured from the ICLE Website.

In these austere times, schools are feeling the effects of the slowing or stoppage of public funds. Even though there is a temporary kink in the supply chain, there is still a societal expectation that schools must be willing to do more with less in preparing students for the advanced career skills required in the 21st century.

One direction that many schools take to implement new State standards, which are meant to increase rigor, is to purchase pre-packaged curriculum materials from commercial vendors. Too often, districts spend heavily on pre-packaged programs and workbooks that, when used in isolation from authentic assessment practices, make learning artificial by removing it from direct application to the real world. These “canned curricula” promote a one-size-fits-all approach that can devalue the professionalism of the teaching staff. It is not hard to see the popularity of this approach given the prevailing metaphor that public schools are tiny businesses that manufacture human capital. By creating a uniform assembly line, the raw materials all undergo a linear process of change that is easy to implement and control.

In an effort to transform educational institutions to better serve students and faculty, it is time to adopt a new metaphor for the learning process. Schools should be viewed as a Guild of Skilled Craftspeople serving unique geographical communities. One needs only to look at the economic landscape of the community a school serves for the answers to relevant curricular assessment. Many districts include phrases about creating partnerships with the community in their mission statements, but I challenge districts to go deeper when forming these community relations. Invite local business, industry, and social services to participate in the assessment development process.

Teachers are professionals in the education of children and experts in their chosen academic disciplines. It is important to let them have a voice in creating the learning plans of their classrooms. In an effort to create small works of curricular craft, they must be given the right inspiration and time to collaborate with the right people. Local commerce has always relied on schools to educate and train the next generation of workers. By inviting industry professionals to the table when educators are creating authentic assessments, the school and community are truly working together for the educational benefit of the young learners. It is important for both institutions to claim sponsorship of the authentic assessment materials by placing their respective logos on the document. Sharing ownership of the learning outcomes may result in industry providing “real world” materials and tools for students to use in the classroom. Cash-strapped districts can now use their new assessments and learning materials to trigger inspiration at the classroom level.

When schools can develop authentic assessments that mirror the spirit of the local culture, learning will reach a level of relevance that allows students to develop strong community relationships while achieving highly rigorous academic goals.

Yours in Education,

Dr. Gregg McGough, CRI Blogger/CRI Podcaster

Every Student Succeeds Act: Changing the Policy of Standardization

Image captured by G. McGough

On December 10, 2015, President Obama signed the Every Student Succeeds Act into law, replacing the No Child Left Behind Act, President George W. Bush’s spin on President Johnson’s Elementary and Secondary Education Act (1965). After several decades of legislation, the federal government is finally retreating from educational policy management and is looking to States to take a more active role in the governance of public education.

Education Week (January 4, 2016) published a wonderful article titled, Will States Swap Standards-Based Tests for SAT, ACT? One of the benefits of having a blog is the ability to take one’s musings and publish them to start a virtual conversation. Take a moment to click the hyperlink and go read the article. I will wait…

Please feel free to leave comments at the end of the blog so that the conversation can continue.

The article claims that seven states are looking to abandon their current high school standardized testing practices and subcontract the process to the SAT or ACT to determine college readiness. As a strong advocate for college and CAREER readiness, I wondered why states didn’t include other possible CAREER assessment measures, such as those from the National Occupational Competency Testing Institute (NOCTI). If states have truly been given the freedom to redefine what high school testing looks like, why not begin to give credibility to CAREER readiness? Career assessments demonstrate student competency in the areas of job and task-based analysis.

In the hierarchy of academic disciplines, the label “career prep” carries a connotation of being lesser than “college prep.” It is time to elevate “career readiness” at this critical juncture in educational policy.

Schools could be designed to have students demonstrate their college and/or career competencies in one of two tracks, SAT/ACT or NOCTI. This type of differentiation might trigger the type of education reform that allows all students to truly find success in the training of their vocational track.

Please leave comments, so that we can clarify our thinking about this groundbreaking topic and possibly push for reform.

Yours in Education,

Dr. Gregory M. McGough, Blogger & Podcaster

Using Data to Overcome Reading Obstacles in Career Readiness

Reading is a subject and skill set taught throughout the elementary years. However, reading, per se, is seldom taught at the secondary level. Many of us would agree that as textbooks and other assigned reading materials grow in terms of difficulty, vocabulary, and structure, it is critical that we collectively help our secondary students continue to refine their reading skills and strategies, as a mechanism to help prepare them for college and career success.

But, how do we go about doing that when we know that students read at such varied levels of proficiency? The best starting place is to determine the level at which each individual student is reading. The use of Lexile measures can pinpoint with great accuracy an individual student’s reading level. These data can then be used to guide the development and implementation of appropriate reading instruction in order to build on the strengths and address the deficiencies of students across a class or course. Engaging in these types of student reading assessments is growing in importance as the Common Core State Standards require students to be able to read at higher, more advanced levels.
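
As a rough sketch of how Lexile data might inform planning, the example below groups students into reading bands relative to a course-level target. The cutoffs, the target, and the sample scores are illustrative assumptions, not official Lexile or Common Core values.

```python
# Minimal sketch: grouping students by Lexile measure to plan differentiated reading.
# The band cutoffs, the grade-level target, and the sample data are illustrative
# only, not official Lexile or Common Core values.

GRADE_LEVEL_TARGET = 1050  # hypothetical target Lexile measure for the course

def reading_band(lexile: int) -> str:
    """Classify a student's Lexile measure relative to the hypothetical target."""
    if lexile < GRADE_LEVEL_TARGET - 200:
        return "intensive support"
    elif lexile < GRADE_LEVEL_TARGET - 50:
        return "approaching target"
    elif lexile <= GRADE_LEVEL_TARGET + 100:
        return "on target"
    else:
        return "above target"

students = {"Avery": 820, "Blake": 1010, "Casey": 1190}  # sample scores
for name, measure in students.items():
    print(f"{name}: {measure}L -> {reading_band(measure)}")
```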

Once these reading levels have been determined, educators can adjust reading materials to correspond both to current reading levels and the desired goals as outlined by the Common Core State Standards. Then, both pre-reading and post-reading comprehension strategies can be incorporated in order to increase reading comprehension of more advanced reading passages.

In the corresponding chapter of the handbook, we provide information, resulting from research conducted by MetaMetrics, regarding the desired reading levels (measured in Lexiles), in addition to those suggested by the authors of the Common Core State Standards. For more information about teaching to different reading levels, see the entire chapter titled How Can You Teach Students Who Read At Different Levels?

 

The Challenge of Grading Student Performance

Grading has historically implied the determination of whether a student response is correct or incorrect. Averaging correct responses often determines an overall grade. To evaluate student work on performance assessments—where student skills and capabilities can be directly observed—teachers often use rubrics. Rubrics are much better at giving detailed feedback on the level of performance to students, teachers, and parents. Further, rubrics are much more appropriate where the application of knowledge and skills serves as the focus of the assessment. A challenge for teachers is how to give feedback on performance and still record a student grade.
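
One way to reconcile detailed rubric feedback with the need to record a single grade is to map rubric levels onto a weighted percentage. The sketch below assumes a hypothetical three-criterion analytic rubric scored on a 4-point scale; the criteria, weights, and conversion are illustrative, not a prescribed scheme.

```python
# Minimal sketch: recording a grade from an analytic rubric without losing the
# criterion-level feedback. The criteria, weights, and 4-point scale are
# illustrative assumptions, not a prescribed grading scheme.

RUBRIC = {                      # criterion -> weight (weights sum to 1.0)
    "content accuracy": 0.4,
    "organization": 0.3,
    "use of evidence": 0.3,
}
MAX_LEVEL = 4                   # assumed top performance level on each criterion

def rubric_to_grade(scores: dict[str, int]) -> float:
    """Convert per-criterion levels (1..MAX_LEVEL) into a weighted percentage."""
    return 100 * sum(RUBRIC[c] * scores[c] / MAX_LEVEL for c in RUBRIC)

student_scores = {"content accuracy": 4, "organization": 3, "use of evidence": 2}
print(f"Recorded grade: {rubric_to_grade(student_scores):.0f}%")
# The per-criterion levels remain available as specific feedback on strengths
# and deficiencies, even though a single grade is recorded.
```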

As opposed to dichotomous scoring, rubrics allow for the “classification” of student performance (e.g., skill mastery, comprehension, competence, etc.) along some sort of well-defined continuum. These various continua might be based on quality of performance, frequency of performance, depth of understanding, or numerous other types of performance descriptors. Regardless of whether analytic, holistic, or mini-rubrics are being used, these continua allow teachers to communicate about specific student strengths and deficiencies—something that typical pencil-and-paper assessments simply cannot do.

However, that being said, care should be taken when developing rubrics. The language must be clear, and the descriptions of adjacent levels of a specific performance indicator should not “overlap” in any way. Overlapping descriptions will undoubtedly lead to confusion on the part of the student (or parent) when reviewing the results of a performance assessment rubric completed by the teacher. Honestly, this can also result in confusion in the mind of the teacher if clarity is not built into the rubric.

Rubrics are fabulous tools that enable teachers to provide substantive and meaningful feedback to students. However, they must be carefully designed in order to accomplish this goal. For more information about designing and using scoring rubrics, see the entire chapter titled How Do You “Grade” Student Work On Performance Assessments?

Remember That Every Child Learns Differently…

In an earlier blog, we discussed the use of student data to help inform decisions about revising group-level instruction. The process of interpreting assessment data to guide individual student interventions is very similar to the process for revising group-level instruction: begin with the data and then target content or skill areas where students are noticeably deficient. Similar cautions are important here. For example, educators should make sure that they rely on multiple measures of student proficiencies and capabilities. It is important to look at the results of standardized assessments, as well as classroom assessments.

This being said, educators must be careful to avoid the over-interpretation of standardized test results, especially when using those data to inform individual student interventions. For example, on a subtest with five items, a student may answer three of the items correctly, perhaps being careless in responding to one item and omitting another. This student’s “proficiency” on that content would be reported as 60%, which most educators would initially interpret as poor understanding or mastery. Of course, we likely do not know why one item was missed (a careless mistake or not?) and the other omitted (did the student inadvertently skip the item or not understand the material at all?). Therefore, it is likely more appropriate to interpret raw scores, as opposed to standardized scores such as percentile ranks, or even percentages of items answered correctly. Educators should exercise the same caution when interpreting the results of localized classroom assessments.
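
To make the point concrete, the sketch below shows how little precision a five-item subtest actually offers. The Wilson score interval used here is simply one standard way of expressing that uncertainty; it is not part of any particular test’s reporting.

```python
# Minimal sketch of why a "60%" from a five-item subtest warrants caution:
# the raw count (3 of 5) carries too little information to pin down proficiency.
# The Wilson interval is one standard way to express that uncertainty.

from math import sqrt

def wilson_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion of items answered correctly."""
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

low, high = wilson_interval(3, 5)
print(f"Raw score 3/5 ('60%'): plausible proficiency roughly {low:.0%} to {high:.0%}")
# The interval spans a very wide range, so labeling the student "60% proficient"
# over-states the precision of a five-item subtest.
```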

Utilizing student data for purposes of truly informing the wide variety of decisions that educators are charged with making should be a routine part of every educator’s “toolbox”. The consequences of making inaccurate decisions about students are simply too grave; they cannot and should not be taken lightly. For more information about using data to help guide the design of individual student interventions, see the entire chapter titled Once You Know Who’s Not Learning, How Can You Help Individual Students?

I Hope This Isn’t Like Rocket Science!

How Can You Tell What Your Students Aren’t Learning And What Do You Do Once You Know That?

Data-driven decision making has been at the top of the buzzword list in education for the past several years. However, we believe that it is no longer just a “buzzword”—it now encompasses a critical skill set that all educators are responsible for developing and using as part of their professional practice. The thought of working with standardized test data and other sorts of assessment data is oftentimes overwhelming to many educators. However, rest assured that the process is truly not rocket science. It will, however, impress your friends and colleagues!

Engaging in a process of data-driven decision making essentially involves looking at multiple sources of student assessment data, reflecting on how and where the assessed knowledge and skills have been taught, and then revising instruction and assessment methods in order to address areas where students may not have performed as well as expected. In a nutshell, the process really involves making use of student data that you have likely already collected, so there is no additional work with respect to that part of the process. Throughout, it is important to be mindful of collecting and using multiple measures of student performance in order to get a well-rounded picture of student performance on targeted knowledge and skills.
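
As a small illustration of the multiple-measures idea, the sketch below combines results that would already be on hand (a standardized subtest and a classroom unit test) and flags skill areas where students underperformed on more than one measure. The skill names, scores, and the 70% threshold are illustrative assumptions, not a prescribed procedure.

```python
# Minimal sketch: flagging skill areas for instructional revision using multiple
# measures already collected. Skill names, scores, and the 70% threshold are
# illustrative assumptions only.

THRESHOLD = 0.70  # hypothetical cutoff for "performed as well as expected"

class_results = {
    # skill area: {measure name: proportion of students proficient}
    "main idea":      {"state subtest": 0.82, "unit test": 0.78},
    "inference":      {"state subtest": 0.55, "unit test": 0.61},
    "text structure": {"state subtest": 0.64, "unit test": 0.87},
}

for skill, measures in class_results.items():
    low = [m for m, score in measures.items() if score < THRESHOLD]
    if len(low) == len(measures):
        print(f"Revisit instruction on '{skill}' (low on all measures: {', '.join(low)})")
    elif low:
        print(f"'{skill}' is low on {', '.join(low)} only; check further before acting")
```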

Being comfortable with and learning how to engage in data-driven processes is an essential skill for the 21st-century educator. It is critical that neither teachers nor administrators shy away from developing these critical skills. For more information about the processes of data-driven decision making, see the entire chapter titled How Can You Tell What Your Students Aren’t Learning And What Do You Do Once You Know That?