Abolishing the ACT and SAT

Chris Streetman


Abstract

Colleges often use the ACT or the SAT to decide whether to admit a student and how much scholarship money to award. Colleges on the east and west coasts tend to use the SAT; other colleges use the ACT. Although most colleges treat both tests as valid assessments, a number of sources disagree. Those dissenting sources would most likely support the view that standardized tests, such as the ACT and SAT, should be abolished because they lack reliability, perfectly consistent administration, and content validity.


Many high school students spend the spring semester of their junior year preparing to take the ACT or SAT. Colleges often use one of these two tests to determine whether to admit a student or how much scholarship money to award. Colleges on the east and west coasts tend to use the SAT; other colleges use the ACT.

So what are the ACT and SAT? The Oxford Dictionary defines the SAT as "a Scholastic Aptitude Test, a test of a student's verbal and mathematical skills, used for admission to American colleges" (Oxford Dictionaries). The ACT tests, as described in "Data on Student Preparation, College Readiness, and Achievement in College," "seek to predict how current students will perform in courses commonly taken by new college students" (24). Most colleges consider both tests valid assessments. However, other sources beg to differ. These dissenting sources would most likely support the view that standardized tests, such as the ACT and SAT, should be abolished because they lack reliability, perfectly consistent administration, and content validity.

The ACT and SAT are both standardized tests, and standardized tests involve several terms that must be defined before their value can be assessed. The first factor is reliability. James Popham says, "With respect to education assessment, reliability equals consistency" (77). One factor to consider when examining a test's reliability is the standard error of measurement. Popham explains, "Standard errors of measurement, which differ from test to test, are similar to the plus-or-minus margins of error accompanying most opinion polls" (78). Put simply, the standard error of measurement determines the range of plausible scores around a student's reported score on a given area or subarea of the test. If the student took the test again, he or she could just as easily earn a different score anywhere in that range.
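
To make the idea concrete, here is a minimal sketch in Python of how a standard error of measurement translates into a band of plausible scores. The score spread and reliability figures below are invented for illustration; they are not the ACT's or SAT's published statistics.

```python
import math

def sem(score_sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return score_sd * math.sqrt(1.0 - reliability)

def score_band(observed: float, score_sd: float, reliability: float):
    """Roughly 68% of retakes would land within one SEM of the observed score."""
    e = sem(score_sd, reliability)
    return observed - e, observed + e

# Hypothetical figures: a composite with a 5-point spread and 0.90 reliability.
low, high = score_band(observed=24, score_sd=5.0, reliability=0.90)
print(f"SEM: {sem(5.0, 0.90):.2f}")                       # SEM: 1.58
print(f"Plausible score range: {low:.1f} to {high:.1f}")  # 22.4 to 25.6
```

On these invented numbers, a reported score of 24 is better read as "somewhere between about 22 and 26," which is exactly the consistency problem Popham describes.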

But the reliability of these test scores affects more than just the individual. Standardized tests are used to determine whether schools stay open as well as the amount of funds they receive. Dr. Audrey T. Edwards discovered this fact when she took a sabbatical at a public high school to learn about some of the difficulties high school teachers face. Her colleague Judith J. Pula interviewed her afterwards. Edwards claims, "A single test score cannot possibly measure a school's effectiveness, yet schools' funding and reputations rest on test results" (Edwards and Pula 13).

Similarly, in the Midwest, the ACT is used to determine a student's potential for academic success in college as well as the student's acceptance to college. However, the ACT does not necessarily measure progress. The Oxford Dictionary defines progress as a "forward or onward movement toward a destination" or a "development towards an improved or advanced condition" (Oxford Dictionaries). The ACT provides a single score on one test taken on one day of a student's life, measuring only the student's academic achievement and aptitude on that day. Likewise, the SAT measures a student's aptitude for college but fails to measure progress.

One article, "Everyday Courage in the Midst of Standardization in Schools," seeks ways to be free from these standardized tests without putting schools in jeopardy through decreased funds or damaged reputations. Its author, Frank Pignatelli, suggests:

High-stakes testing, tightly scripted curricula, fixed blocks of instructional time, and all the associated practices and policies that drive toward uniformity and sameness regardless of interest, need, and the best judgment of educators situated in specific contexts present a danger and cause suffering. (2)

This implies that standardized tests are not the most reliable assessments, yet the state determines funds for schools based on those very tests. Teachers face an ethical dilemma: do they spend their time preparing students for the ACT and SAT to increase school funds and reputations, or do they stick with the normal curriculum? Thankfully, teachers have the option of a middle ground. Many schools offer prep courses for standardized tests. Since these courses occur outside of normal school hours, teachers do not have to cut back on lessons to achieve scores high enough for the school to receive sufficient funds. Unfortunately, this means that the teachers of these prep courses must commit to working overtime.

Another factor to consider when analyzing a test's value is the way the test is administered. In order to achieve any level of accuracy or reliability, the test must be administered exactly the same way throughout the nation. This may seem simple and straightforward, but in reality it is quite difficult to enforce. Genevieve Hodgi Gay conducted a study on irregularities that occur when administering standardized tests. She found, "A survey of 168 teachers and interviews with 8 regional research/test coordinators in North Carolina found that incidences of inaccurate timing, altering answer sheets, coaching, teaching the test, errors in scoring/reporting, and student cheating do exist" (Gay par. 1). So even in the strict atmosphere of the test rooms, students might get away with cheating, or some of the teachers may change the answers later.

What would possess them to do this? Gay states, "Teachers experience (p)ressures from administrators, peers, and parents for students to excel on standardized test [sic]" (par. 1). This goes back to Edwards' claim about school funding and reputation being dependent on test scores. Even on her sabbatical she found, "The school had failed to make Adequate Yearly Progress (AYP) in reading the previous year, so it had to raise the percentage of students who passed this test. Given limited resources, the surest way to raise this figure was to coach those who had failed a practice test by 5 points or less" (Edwards and Pula 2). Teachers coach the test to ensure their schools receive an adequate amount of money to stay open. In this position, the temptation to embellish students' scores would certainly be present. Perhaps the teachers thought they could increase school funding if they improved their students' scores, or perhaps they thought the school's reputation would suffer if the scores were left as they were. The problem is that it is very difficult to prove cheating occurs unless the teachers come forward and confess, as they did in Gay's survey. If students take the test at a different school, cheating may be even harder to track.

The third factor is validity or, more specifically, content validity. Andrew T. Roach, Stephen N. Elliott, and Norman L. Webb state, "The alignment between an assessment and the content it is meant to assess is an important piece of evidence in any validity argument" (219). So what are standardized tests supposed to assess? They are supposedly designed to test students' aptitude for success in college, but can these tests accurately measure potential if students are being tested on concepts they have never learned? None of these tests actually looks directly at what is being taught in schools. They all rely on other standardized tests, like the ISAT or NAEP, to determine the basic curriculum for their items. If those tests are inaccurate, the entire system is flawed.

Another way these tests fail to achieve content validity lies in how they phrase their questions. Consider the following question from an online ACT English practice test:

Our household might have been described as uncooperative. Our meals weren't always served in the expected order of breakfast, lunch, and supper.

4. Which choice would most effectively introduce the rest of this paragraph?

F. NO CHANGE

G. There seemed to be no explanation for why Mom ran our household the way she did.

H. Our household didn�t run according to a typical schedule.

J. Mom ran our household in a most spectacular manner. (The ACT)

This is more a matter of opinion than an assessment of English. More than one of these options could effectively open the paragraph, so the answer depends on the stylistic preference of whoever created the test question. Even a professor who retook the SAT claims, "I teach writing and journalism, yet I found some questions were written so awkwardly - although they were grammatically correct - that I wanted to take a red pen to them and demand that they be rewritten" (Harper and Vanderbei 1). If ambiguities like these show up throughout the test, a student could lose several points even though he or she could figure out the answer were the question worded clearly. The student loses points for the test writer's errors. Not every question is this ambiguous, but one or two points can make a difference in a student's life. Furthermore, points lost to ambiguity add up quickly when combined with the points students legitimately miss because they do not know the answer. Those lost points could keep a student out of one or more schools of his or her choice and limit eligibility for some scholarships.

Some people might argue that students can simply retake the test to raise their scores. But if students retake the test, are their scores still valid? The test is designed to measure aptitude and achievement, yet after students take it once, they know what to expect and how to get around it. Robert J. Vanderbei and Christopher Harper are two professors who retook the SAT as an experiment and found:

The College Board itself embraces the notion of teaching to the test, evidenced by the fact that it encourages students to take practice tests and even the full SAT multiple times. If secondary-school educators could be kept in the dark about the content of the exam, and if all students were to take the test cold, I'm sure the SAT would provide valuable information to college admissions offices. (4)

Students retaking the tests would skew the data unless every student in the district retook the test. But some students can barely afford the test the first time and do not have the option of retaking it. Furthermore, once students know what is on the test, it becomes less a measure of aptitude and more a measure of how well they studied. If these tests are really supposed to measure aptitude, is it fair to produce practice tests? Even from a practice test, students can form an idea of what will appear on the actual exam, so their scores may reflect preparation rather than potential.

When Harper retook the SAT, he discovered that all his knowledge of math did him little good:

While a few of the math questions were relatively straightforward, most of them were so convoluted that they seemed intended to trick me rather than to test my knowledge of arithmetic, algebra, or geometry. As some observers have noted, the section doesn't test a knowledge or understanding of math so much as how well one has learned "SAT math." (Harper and Vanderbei 1)

This is another example of how studying for the ACT or SAT could give students an unfair advantage. As Harper said, he was being tested on "SAT math." When studying for either test, students might learn how to apply certain math formulas, but what they mostly learn is how to guess. Even one of Harper's colleagues admits that his methods for studying for the SAT are not appropriate for the classroom:

Dana Mosely, a math teacher who has created a series of DVD's to tutor students for the SAT, has said that in an actual classroom he would never use many of his suggested methods, such as simple guessing by elimination, and plugging in the answers from the choices rather than performing the math to come up with the correct answer. (Harper and Vanderbei 2)

These examples suggest that the SAT lacks content validity. The methods of math taught in a classroom are so different from the methods needed to pass the SAT that the two are almost opposed to each other, and this applies to more than just math. Standardized tests and schools are supposed to work together, not against each other. Whatever knowledge or skill students have gained in their math classes does little to raise their SAT scores; they must set aside almost everything they have been taught about solving problems if they want to achieve high SAT scores. The SAT itself, whether intentionally or inadvertently, encourages only one math method: trial and error.

This would be fine, except for one crucial detail. Both the ACT and SAT allow only a certain amount of time to complete each section, and trial and error is one of the most time-consuming ways to solve a problem. Granted, there are only four or five choices to plug into the problem. But if students happen to try the right answer last, they may not have enough time to finish the section. This trial-and-error method turns the SAT and ACT into games of chance rather than tests of skill. In order to succeed on these two standardized tests, students have to relearn math based on the standards of the SAT or ACT instead of the standards of the math curriculum.
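
As a minimal sketch of the "plugging in the answers" tactic Mosely describes, consider the following Python snippet. The equation and answer choices are invented for illustration, not taken from an actual exam:

```python
# Hypothetical item: "If 3x + 7 = 2x + 12, what is x?"
choices = {"F": 3, "G": 4, "H": 5, "J": 6, "K": 7}

def satisfies(x: int) -> bool:
    # Check the equation directly instead of solving it algebraically.
    return 3 * x + 7 == 2 * x + 12

# Plug in each answer choice until one works. In the worst case the
# correct value is tested last, which is where the time pressure bites.
for label, value in choices.items():
    if satisfies(value):
        print(f"Answer: {label} (x = {value})")
        break
```

Each wrong choice costs another pass through the arithmetic, so a student who backsolves trades time for the appearance of skill on every item.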

So how do schools fight back against these standardized tests? Pignatelli offers a few solutions, the first being:

Educators could decide, for example, not to give the tests. Parents and students could choose, as some have, to boycott the tests. But this position holds substantial risk for the school, the educators and, most importantly, the children and families and could easily result in having fewer options and exacting even more suffering. (2)

This option is clearly too radical to be an effective method, especially when other schools or other students choose to take the test. Such an act may result in a school being permanently closed. Pignatelli also offers the option of compliance:

Conversely, one could concentrate most of one's efforts in preparation for the test to assure greater levels of mastery across the school population. But unquestioning compliance risks enlarging the surface of vulnerability, embeds further the harm wrought by a regime of standardized testing, and shifts the meaning and value of schooling to severely limited, questionable ends. (2)

Not fighting back at all is certainly a feasible option, but it is just as ineffective as the extreme choice to boycott the tests. So what are the schools to do? There must be a middle option. While prep courses help students score well on the SAT or ACT without taking away valuable class time, they are still a method of compliance. How do teachers and students fight back without endangering their school? Perhaps teachers could publish journal articles exposing the absurdity of using standardized tests as a measurement of a student's capabilities. Unfortunately, this method has so far proved futile, even when backed by compelling evidence.

But maybe convincing the test-makers is where advocates against standardized testing fall short. Instead of trying to convince the test-makers, teachers, students, and all those opposed to the ACT and SAT should try to convince colleges that the data are meaningless. The process would take time, but if every college ignored ACT and SAT scores, the number of students taking the tests would gradually decrease, theoretically until it reached zero. If that happened, there would be no scores on which to base schools' funds or reputations. Funds would then have to be decided based on Adequate Yearly Progress. This is still standardization, but at least it is a step in the right direction.

The ACT and SAT are meant to measure academic achievement or aptitude, but they do not always do so. Among the factors that determine the value of these standardized tests are reliability, administration, and content validity. Any anomalies that occur during or after the test can lead to inaccurate scores. Content validity shows how well the test items match the purpose of the assessment. Another problem with these two tests is the role of speed: students must complete each section in a limited amount of time, or the unanswered questions are counted wrong. The ACT and SAT are affected by all of these factors, meaning that they are not necessarily the best or most accurate measurements of student achievement.


Works Cited

The ACT. ACT, Inc., 2012. Web. 16 Apr. 2012.

"Data On Student Preparation, College Readiness, And Achievement In College." Peer Review (2007): 24-25. Academic Search Premier. Web. 1 Mar. 2012.

Edwards, Audrey T., and Judith J. Pula. "Back To High School: A Teacher Educator's Hands-On Encounter With The Pressures Of High-Stakes Testing." Delta Kappa Gamma Bulletin (2011): 11-14. Academic Search Premier. Web. 1 Mar. 2012.

Gay, Genevieve Hodgi. "Standardized Tests: Irregularities In Administering Of Tests Which Affect Test Results." Journal Of Instructional Psychology 17.2 (1990): 93. Academic Search Premier. Web. 16 Apr. 2012.

Harper, Christopher, and Robert J. Vanderbei. "Two Professors Retake the SAT - Is It a Good Test?" The Chronicle of Higher Education 55.39 (2009): 30-31. Web. 16 Apr. 2012.

Pearsall, Judy. Oxford Dictionaries. Oxford University Press, 2012. Web. 25 Feb. 2012.

Pignatelli, Frank. "Everyday Courage In The Midst Of Standardization In Schools." Encounter 23.2 (2010): 1-4. Academic Search Premier. Web. 5 Apr. 2012.

Popham, W. James. "Unraveling Reliability." Educational Leadership (2009): 77-78. Academic Search Premier. Web. 1 Mar. 2012.

Roach, Andrew T., Stephen N. Elliott, and Norman L. Webb. "Alignment Of An Alternate Assessment With State Academic Standards: Evidence For The Content Validity Of The Wisconsin Alternate Assessment." Journal Of Special Education 38.4 (2005): 218-231. Academic Search Premier. Web. 5 Apr. 2012.
