Interview with Reid Lyon: Reading First is the largest concerted reading intervention program in the history of the civilized world.

Michael F. Shaughnessy
Senior Columnist
Eastern New Mexico University

Question 1: The Reading First Impact study has recently been released. In a previous interview you had predicted that the Department of Education's Evaluation would not show significant effects for the program because of a number of possible problems with the study. Can you elaborate on your initial concerns?

Yes, my major concerns are:

· The Impact Evaluation was delayed significantly in its design and implementation;

· The evaluation treats Reading First as both a policy and a programmatic independent variable that did not lead to significant differences between Reading First and non-Reading First schools on a measure of reading comprehension. But it is what teachers do in imparting instruction, implemented with fidelity, that improves reading comprehension – not the overarching policy. As it stands now, we know very little about the relationship between specific program characteristics and the development of all of the reading components – not just reading comprehension.

· The probability that the sample studied in the Impact Evaluation is representative of all Reading First schools is limited. Because of the delay, the evaluators could not randomly select a nationally representative sample. Far too few Reading First schools (119 of a total of 5,880) were studied given the overlap (contamination) between Reading First and non-Reading First schools with respect to what was occurring instructionally in the classroom;

· I share with Mike Petrilli from the Fordham Foundation the concern that the delay in designing and implementing the evaluation made it impossible for the states that won the first grants to participate in the study, given that their programs had started before the evaluation was initiated. One could hypothesize that these states began their Reading First implementations with solid implementation plans in place.

· I also share with Petrilli the concern that the schools selected for study were the ones that just barely won grants under the program, which were compared to schools that just barely missed funding. (Schools are ranked according to various criteria, such as poverty, need, etc.) Petrilli points out that the schools where you would expect the greatest impacts from Reading First are the poorest ones, enrolling students who are furthest behind in reading – schools that would have been ranked at the top of the priority list. These schools weren't included in the study;

· Because of the delay, the current impact study's scope and depth departed significantly from the scope and comprehensiveness of the evaluation mandated in Section 1205 of the Reading First legislation. I have outlined the requirements of the evaluation mandated in congressional language in Section 1205 below;

· The findings of no significant differences in reading comprehension outcomes presented in the Interim Report are difficult to interpret. This is because, as noted earlier, many non-Reading First schools were implementing the same programs and professional development opportunities as the Reading First schools. This impact evaluation is not a true experiment, which certainly could have been conducted given the tremendous financial resources allocated for the evaluation. As Tim Shanahan has pointed out, the comparisons made were not Reading First with non-Reading First schools, but Reading First with less-Reading First schools.

· The interim report does not provide evidence that the impact study examined specific relationships between each of the essential components of reading and overall reading proficiency as mandated in task 1 in Section 1205 of the law. All that is known from the interim report is the amount of time spent in instruction for each of the components. The extent to which limitations or strengths in each of the components influenced reading comprehension is not known;

· The interim report does not provide evidence that the impact study measured the extent to which different materials (including assessments and programs) improved reading proficiency as mandated in task 5 in Section 1205 of the legislation. Thus, it is impossible to determine which students benefitted from which materials (assessments and programs) and under which conditions;

· The interim report does not provide evidence that the impact study measured whether specific assessment instruments and strategies helped teachers identify specific reading difficulties as mandated in task 6;

· The interim report does not provide data on the effects of professional development on the improvement of reading proficiency as mandated in task 7 of Section 1205 of the congressional language;

Question 2: You mentioned that the delay in initiating the design and implementation of the evaluation limited its scope and capability to produce interpretable findings and inform improvements. Can you explain this issue in more detail?

Yes. Bob Sweet and I felt that a comprehensive and rigorous evaluation of the Reading First program was essential. As such, we crafted 10 primary analyses and measurements that an independent evaluator was required to carry out (again, see Section 1205 of the law). Specifically, the law required:

(1) An analysis of the relationship between each of the essential components of reading instruction and overall reading proficiency.

(2) An analysis of whether assessment tools used by State educational agencies and local educational agencies measure the essential components of reading. 

(3) An analysis of how State reading standards correlate with the essential components of reading instruction.

(4) An analysis of whether the receipt of a targeted assistance grant under section 1204 results in an increase in the number of children who read proficiently.

(5) A measurement of the extent to which specific instructional materials improve reading proficiency. 

(6) A measurement of the extent to which specific screening, diagnostic, and classroom-based instructional reading assessments assist teachers in identifying specific reading deficiencies.

(7) A measurement of the extent to which professional development programs implemented by State educational agencies using funds received under this subpart improve reading instruction. 

(8) A measurement of how well students preparing to enter the teaching profession are prepared to teach the essential components of reading instruction.

(9) An analysis of changes in students' interest in reading and time spent reading outside of school.

(10) Any other analysis or measurement pertinent to this subpart that is determined to be appropriate by the Secretary.

No doubt, these are complex analyses requiring research designs and methods (including sampling strategies) appropriate to the evaluation targets. Because of the complexity of the required evaluation, the law provided $25 million PER YEAR over a six-year period (total = $150 million) to ensure that the evaluation research tasks could be accomplished. The amount of funds set aside for the external evaluation was arrived at following a survey of evaluation researchers, who were asked to identify the cost of the most rigorous evaluation possible; $25 million per year was the figure they arrived at. It is unclear how much the current evaluation cost, nor is it clear where the bulk of the funds set aside for the evaluation were used. The law stated that such funds "may" be used for technical assistance, and that could be the case.

I anticipate that many will argue that the scope of the evaluation mandated in the legislation is far too ambitious for an external evaluation study. We wanted it to be very ambitious. But this was not to be, as noted in this footnote of the report:

"The Reading First Impact Study was originally planned as a randomized control study, in which eligible schools from a sample of districts were to receive Reading First funds or become members of a non- Reading First control group. The approach was not feasible, however, in the 38 states that had already begun to allocate their Reading First grants before the study began. Furthermore, in the remaining states, randomization was counter to the spirit of the Reading First Program, which strongly emphasizes serving the schools most in need (p. 7)."

The delay led to an inability to design the study and address the evaluation targets specified in the law. Had the evaluation been ready to be implemented in the early stages of Reading First, the amount of critical information we would have learned would have increased dramatically.

Some of the evaluation targets delineated above would have required a different design and focus than that which was employed in this impact study. For example, the evaluators would have had to carry out systematic measurement of proficiency in all domains of reading development noted in the legislation and ensure the use of the same measures across all Reading First sites – not just measure time spent in instruction for each of the domains. Moreover, tests of the effects of specific materials (assessments, programs) on reading outcomes would require designs capable of showing causal effects.

Unless the final Impact Study report addresses the required analyses and measurements articulated in the legislation and listed above, it is not the study that Congress intended. A significant amount of information will have been lost. More importantly, it will not be possible to answer a fundamental question: For which students are which instructional materials (programs) most beneficial under well-defined conditions (professional development, implementation fidelity, and so on)?

This is not a trivial issue because many of the measurements and analyses that we felt were essential to the accurate interpretation of any findings, positive or negative, do not appear to have been carried out. Again, to be fair, this was due to the delay in getting started.

To elaborate on a couple of points I made earlier, the current design used to test the evaluation questions identified in the Interim Report is not optimal for addressing task number 5 (see above) mandated in the legislation, which would be best examined using a randomized controlled study.

In the Interim Report, specific programs and materials were not identified, although it is well known that core programs differ in their alignment with SBRR, the content presented, their scope and sequence, and their relative coverage of and emphasis on each of the five domains of reading subsumed within the Reading First legislation.

Tim Shanahan has pointed out that even if differences in the effects of core programs had been identified, the frequent revision of commercial programs in response to state adoption requirements would render any findings obsolete following the next adoption.

This is a legitimate argument, but one I disagree with. Examining program-specific effects on outcomes would have clearly yielded objective information for a publisher to use in any future revisions. The evaluation we included in the congressional language could have also provided critical information on whether particular programs were more likely to be implemented with fidelity, and whether particular program assessment and instructional characteristics were influential in producing specific student outcomes. This would have been possible given that every state and participating LEA identified the specific programs they were using.

Also recall that the majority of core programs adopted by states and schools had not been tested for effectiveness because of a congressional decision to change that criterion to the softer "based on SBRR" criterion. Addressing task 5 had the potential to provide effectiveness information for each program – whether commercial basal, supplemental program, or comprehensive school model – as implemented during the initial Reading First six-year term. Many of these programs had made claims of effectiveness. It is important to test those claims.

What I continue to argue, to the consternation of commercial basal program publishers and private vendors, is that at some point education must become more serious about holding commercial publishers of educational materials accountable for the effectiveness of their programs. Establishing effectiveness and defining explicitly the conditions under which programs, approaches, and strategies are effective takes the basic effectiveness variable off the table. A focus can then be placed on determining why programs are effective in one context and not in others. This moves the analysis to issues related to teacher familiarity with, and competence in providing, the program, and to the essential implementation factors that can make or break a program's effectiveness – proven or not.

Let's cut to the chase – are the core programs adopted by states and districts actually effective, under what conditions, and do the effectiveness data support market claims?

We don't know. Remember, and I am repeating myself, the majority of core reading programs adopted by states and districts had never actually demonstrated effectiveness. That was not required in the law following congressional revisions of the initial language; the programs only had to be "based on SBRR". For years, commercial programs have given lip service to determining the effectiveness of their products, and had the evaluation addressed all of the targets mandated in the law, more information would have been available to determine whether their claims were valid.

Question 3: You have made the point that this impact evaluation cannot draw strong conclusions about the lack of significant differences between Reading First and non-Reading First schools on reading comprehension outcomes. Can you explain this in more detail?

Tim Shanahan said it best when he stated that "the comparisons were not Reading First with Non-Reading First schools, but Reading First with less-Reading First". There are several reasons for this.

When Bob Sweet and I crafted the legislation, we knew that SBRR would be a foreign concept to many, as would using continuous data collection to differentiate instruction. For this reason, substantial funds (20% of the Reading First allocation) were to be used for professional development. However, we stipulated that these funds could be directed toward both Reading First and non-Reading First schools.

In some cases, states added their own funds to augment Reading First professional development programs. For example, Alabama and Arizona were among several states that dedicated substantial state funds to ensure that "Reading First Like" programs were implemented in all elementary schools.

The federal government also created programs in several states to provide professional development and technical support to non-Reading First schools to develop their capacity to implement Reading First assessment and instructional criteria. An analysis reported by Tim Shanahan indicates that 60 percent of Reading First and non-Reading First schools were following the same curriculum by the third year of implementation. Shanahan also reported that school districts like Aurora, IL and Syracuse, NY required all of their non-Reading First schools to adopt the same reforms as Reading First schools using local money.

It does not appear that the current impact study specifically addressed this overlap in the evaluation, although the final report may present these data. In many cases, one would expect Reading First and non-Reading First schools to be more alike than different in their impact on reading outcomes. Again, and I am beating a dead horse, the comprehensive external evaluation mandated in the congressional language would have allowed for fine-grained control and analysis of these confounds.

Question 4: Some commentators have suggested that the absence of significant differences between Reading First and non-Reading First schools was a result of too much emphasis on basic skills in Reading First schools. Is this observation accurate?

If I understand the data presented in the interim report correctly, it is not accurate. Ironically, the interim report presents data showing that the majority of instructional time in Reading First schools was focused more on comprehension than on phonics. First grade classroom instruction included 21.4 minutes on phonics and 23.6 minutes on comprehension; second grade classroom instruction included approximately 29.2 minutes of instruction on comprehension and approximately 14 minutes on phonics. Thus the conclusion that comprehension outcomes were not significant because of a focus on basic skills does not seem to square with these data.

In addition, the results indicated that less emphasis was placed on phonemic awareness (a basic word-level skill), fluency, and vocabulary than on comprehension and phonics – with vocabulary, like background knowledge, being absolutely critical for comprehension. But you can't engage vocabulary to determine meaning unless you get the words off the page accurately and fluently. As the IES director pointed out, the emphasis on comprehension, which again was greater than the emphasis on phonics, may not have been structured enough. I would agree that this is the more defensible possibility – time alone does not guarantee effective instruction. It may also be the case that students did not engage in sufficient wide reading, or that more instructional time was required to demonstrate effects on comprehension. But remember, it is likely that many Reading First and non-Reading First schools were implementing the same approaches, leading to null findings.

Question 5: Some commentators have indicated that one explanation for the lack of significant differences is that scientifically based reading instruction does not work. Do the results from the interim study support that conclusion?

No, that conclusion is scientifically incorrect, and how one reaches such a conclusion from a study yielding null results escapes me. In their reviews of the research, both the NRC report on beginning reading and the NRP report indicated that the elements of reading development addressed in Reading First were essential for proficient reading and that their effectiveness was mediated by systematic and direct instruction in the components.

The effectiveness of SBRR programs evaluated using appropriate experimental designs and methods has been replicated several times. In fact, a study published in the highly regarded Reading Research Quarterly by Mathes et al., which won the Albert J. Harris Research Award from the International Reading Association, found that explicit and comprehensive reading instruction was significantly effective in dramatically reducing reading failure. There are many studies that support this conclusion.

Moreover, replications of the NRP Report results have also stood up to scrutiny when the replications are designed appropriately. For example, two recent invited papers published in the Elementary School Journal concluded that there was no significant difference in the effects of systematic versus unsystematic phonics instruction as reported in the NRP. However, a re-evaluation of these two studies published in the archival and peer reviewed Journal of Educational Psychology, confirmed the findings of the NRP while pointing out why the ESJ design was not appropriate for the question under study. The JEP study was not an invited paper, was submitted through the formal peer review process, and underwent rigorous evaluation prior to publication.

Given that instruction guided by SBRR has been consistently found to be effective, the questions that must be addressed are whether the core programs used in Reading First schools were completely aligned with SBRR, whether teachers were adequately prepared to assess and teach the content, and whether the programs were implemented with fidelity. These issues are especially important for a study yielding no differences, or null results. There is an axiom in research that the null hypothesis cannot be proven. What this means for the RFIS is that if the results show no differences by virtue of the intervention, it does not mean that the intervention is not effective. The task is to determine why the null results might have emerged.

The IES Director pointed out additional reasons that could explain the results, and they have merit. He suggested that the instruction may work but was not sufficient to significantly impact comprehension even if it improved decoding skills, reading fluency, and vocabulary. This is indeed possible. Had all of the evaluation targets mandated in the law been specifically addressed, data that could inform this interpretation would have been available.

Question 6: Some have argued that the Reading First schools should have shown a greater impact on comprehension because more time was allocated for reading instruction in Reading First schools. Does this conclusion have merit?

I don't believe so, but I certainly stand ready to be corrected. The interim report presented data showing that the actual time spent in daily instruction in the five reading domains was in the 59-minute range in Reading First classrooms and about 50 minutes in the non-Reading First schools. While this difference is statistically significant, I have my doubts that 9 minutes of additional daily instruction will produce a great deal of difference.

Moreover, it is surprising that the instructional time in these schools is far below what state-level Reading First implementations require, which hovers around 90 minutes of core reading instruction plus an additional period (usually 20–40 minutes) of supplemental instruction. So here we have a situation involving two groups of schools selected because they are high poverty and low achievement. One group is funded through Reading First; the other group is not. The Reading First implementation results in 9 minutes of additional core instruction, and there is no mention in the report of the supplemental instruction that should have increased time on task to approximately 120 minutes. Understanding these null results requires a close examination of why the implementation was not more effective in these schools.

Some in the press have emphasized that the weekly total of instructional time favors Reading First schools by close to an hour of additional instruction, but time spent in daily instruction is a more compelling indicator of instructional intensity. For high-risk students in high-risk schools, an additional 9 minutes of daily core instruction is not going to make a dent in their development of reading skills. So all we know is that 59 minutes of instruction (presumably in an unspecified basal) is no better than 50 minutes of presumably unspecified instruction. In Reading First schools, a few more minutes were spent in phonics instruction, but even this amount of time in explicit phonics is not adequate for students at risk for reading problems, especially when the total amount of time in other components is insufficient.

The sample is of great concern. It is important to understand the nature of this variation in instructional time between the schools in the study sample and those that were not represented in the study as it applies to sampling issues. This issue must be addressed aggressively because a great deal can be learned to improve the program from a more comprehensive analysis of allocated instructional time.

Question 7: In your view, has the press provided accurate interpretations of the results presented in the interim report?

Some reports have been fair and others not so fair. To be sure, helping the public understand the details we have discussed in the interviews is difficult given the complexity of the issues surrounding research design, sampling issues, departure from the evaluation scope mandated in the law and so on.

That said, I would have liked to see more in the press informing the public that any conclusions drawn must be interpreted with a significant degree of caution, and providing some facts that would explain the need for caution. Many of these caveats are easy to explain, but somehow they never seem to be emphasized when reporting on educational research findings.

In some of the press accounts there were the usual "hooks" that I suppose are meant to titillate the reader. For example, one press account reported that thousands of students were involved in the study, which is not the case. The unit of analysis in the study, I believe, is the school, so statistical power is determined by the number of schools in the sample, not the number of students. As sections of the report make clear, the number of schools is not adequate to assess many questions involving differences between the two groups of schools evaluated for the study. This problem is puzzling since the budget allocated for the evaluation should have been more than adequate to expand the sample size.
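The unit-of-analysis point can be illustrated with a small simulation. This is my own sketch, not anything from the report: the intraclass correlation of 0.2 and the school counts are hypothetical numbers chosen only to show the pattern. When student scores are clustered within schools, precision is driven largely by the number of schools, not the total number of students.

```python
import random
import statistics

random.seed(0)

def simulated_se(n_schools, students_per_school, icc=0.2, reps=400):
    """Empirical standard error of the overall mean score when scores
    are clustered by school with intraclass correlation `icc`."""
    between_sd = icc ** 0.5        # school-level variance component
    within_sd = (1 - icc) ** 0.5   # student-level variance component
    sample_means = []
    for _ in range(reps):
        scores = []
        for _ in range(n_schools):
            school_effect = random.gauss(0, between_sd)
            scores.extend(school_effect + random.gauss(0, within_sd)
                          for _ in range(students_per_school))
        sample_means.append(statistics.fmean(scores))
    return statistics.stdev(sample_means)

# Same total of 3,600 students in both hypothetical designs:
se_few_schools = simulated_se(n_schools=30, students_per_school=120)
se_many_schools = simulated_se(n_schools=90, students_per_school=40)

# With clustering, spreading the same students across more schools
# yields a noticeably smaller standard error.
print(f"30 schools: SE ~ {se_few_schools:.3f}")
print(f"90 schools: SE ~ {se_many_schools:.3f}")
```

The simulation shows why headlines about "thousands of students" overstate the study's precision: with clustered data, the effective sample size is closer to the number of schools than the number of students.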

Some outlets referred to Reading First as a "phonics based" program, which it is clearly not (read the legislation, for goodness sake). Some outlets continue to make this mistake, which might be because they want to kick up the flames in the "reading wars". It is hard to understand the press's continued love affair with the phonics-whole language dichotomy other than its eye-catching appeal, or a belief that the readership cannot understand that reading development requires the integration of several essential reading skills.

This may be a wacky analogy, but maybe it can be explained like the performance of an eight-cylinder car engine. When all cylinders are firing, performance is good. If one or more cylinders are misfiring, then performance suffers. Each cylinder can be thought of as a necessary but not sufficient reading component that must be integrated with all the others to produce reading proficiency.

There was the usual mention that Reading First was based on the NRP findings, which, in part, it was. However, the NRP is frequently characterized as a "phonics based" report, which of course it is not (read it), and as a "Bush Initiative" (it was convened during the Clinton Administration).

You also see in all press accounts a reference to the Reading First "scandal", with blasts from Miller and Obey charging that the program was rife with conflicts of interest, conflating this inaccurate statement with the null findings in the interim report. The press has a responsibility to report these statements by congressional members. But the press should also report that there has been no finding of actual conflicts of interest by the OIG or the Justice Department and that the specific allegations linking individuals with particular programs have not been supported by the OIG investigation.

Question 8: Is it your perception that there is an unusual degree of venom directed toward Reading First from members of the reading community and will the interim report fuel this antagonism?

There is no doubt that those opposed to Reading First and SBRR will be energized in their protests against the program given the findings presented in the Interim Report. I doubt that many will have read the report; they will instead pay more attention to the press accounts, many of which, as pointed out earlier, do not provide the necessary interpretive caveats.

It is hard not to be taken aback by the degree to which many in the reading community want to see Reading First fail, particularly when the programs, methods, and approaches advocated by this constituency or that constituency have never come close to any systematic impact evaluation of the scope implemented with Reading First.

Interestingly, similar attacks have not been made against the Title I program. Somehow the press and the Congress have been silent on the fact that the Title I program, which has served as a massive entitlement funding source for districts and schools, is allocated approximately $3 million a year to ostensibly "evaluate" a $15 billion program – and that evaluation only measures whether money gets from point A to point B, not whether it is effectively helping students who are reading significantly below grade level. Again, no education program to date has ever undergone an evaluation approaching the scope of the current Reading First Impact Study, and certainly none has come even remotely close to the comprehensive evaluation we actually required in the Reading First legislation (see Section 1205).

Question 9: Are there other factors that we should take into account when interpreting the interim report?

Indeed, in all the angst surrounding the Reading First program it seems we have not attended to the fact that it is a relatively new initiative characterized by massive implementation challenges and with substantial growing pains that would characterize any very complex initiative implemented in the public schools. If you visit as many Reading First schools as I have and talk with as many Reading First directors and teachers as I have, the complexity associated with implementation of both the policy and the instructional programs becomes readily apparent. It is more the rule than the exception that during the first two years of Reading First implementation in districts and schools, teachers were first learning to understand, administer and use the results of assessments to inform instruction.

As they were learning these new concepts, they were also taking part in state reading academies to learn more about the foundation of SBRR (in five areas of reading in K–1, and in four areas of reading in grades 2–3).

In addition, as they were learning and using new assessments and taking part in professional development academies and workshops, they were simultaneously learning how to use a new approach to instruction and how to integrate core program instruction with additional interventions when required to meet individual student needs. This was done at the same time they were learning about center activities, grouping students for instruction, and aligning and using supported classroom libraries.

It is important to ask whether any program that has added this amount of new learning to a teacher's other responsibilities – including going to IEP meetings, attending parent conferences, preparing for instruction in math, social studies, and science, serving on school-wide committees, and a host of other tasks – could demonstrate substantial gains after only two years. What is amazing is that despite this unbelievable load, Reading First teachers and their leaders rose to the occasion and have done, and are doing, a superb job. Also note that the GAO and OMB reports show that they consider this work essential and that it is having a major impact.

Question 10: What types of additional information would be helpful in drawing conclusions about the effectiveness of instructional interactions that take place within Reading First and non-Reading First schools?

The quality of the implementation of both policy and the instructional approaches emphasized by the policy must be measured. There are many questions that would have to be posed and answered to obtain a clear picture of implementation fidelity, but a few come immediately to mind.

First, it would be informative to understand whether the quality of the infrastructure (state and district leadership, professional development, building-level management, assessment and evaluation capacity, etc.) actually reflects what was described in state and district grants. The fact is, it takes time to build the infrastructure at the state, district, and school levels that is essential to support the implementation of the program. We need to better understand how different Reading First schools managed this process.

Second, an understanding of the quality of the infrastructure and its relationship to implementation fidelity is critical in interpreting outcome data. Schools could be grouped according to high- and low-quality infrastructure (this sounds easier than it is), holding programs constant, to get some idea of how infrastructure affects outcomes.

Third, outcome data should be collected from schools using different programs, holding constant the quality of the infrastructure, to help determine the conditions under which programs improve reading proficiency.

There are many more issues that have to be addressed to obtain a comprehensive understanding of what it takes to implement a policy and its constituent programs successfully in the complex world of schools. There is a tremendous amount we can learn from the Reading First Impact Study. It certainly points out the complexity of attending to the multiple confounds that can complicate interpretation of the data. It provides information about the program and the evaluation itself that can lead to improvements in both. It reflects the very hard work and excellent abilities of the evaluation team and their commitment to carry out the best evaluation possible given the amount of time available to design the study, implement it, analyze the data, and report the results. The Reading First program has been in existence for only a short period of time, but we have already learned where many of its features can be improved. The evaluation reflects a commitment to evidence-based education and accountability. Hopefully it will inspire policymakers and scientists to carry out equally robust evaluations of policies and of educational and instructional programs.

In closing, let me say this. Reading First is the largest concerted reading intervention program in the history of the civilized world. Most importantly, it is one of the few federal state-grant programs to undertake a rigorous impact evaluation. We set aside $25 million per year for six years to carry out the most comprehensive evaluation of an education program to date. Unfortunately, the significant delay in designing and implementing the evaluation did not allow it to address many of the evaluation targets specified in the legislation (Section 1205), making interpretation of the data difficult and reducing the potential to inform specific improvements. That said, most evaluations of formula funding programs can only tell us how the money was spent, nothing about impact, theory of action, or why we might not have gotten the results we expected. The Title I program, for example, costs the taxpayer $15 billion per year, with $3 million set aside for an evaluation of whether funds get from point A to point B. Title I has never been evaluated using appropriate designs and methods to determine the extent to which the program effectively helps students who are reading significantly below grade level. Now that is a scandal!

Putting aside the current impact study's shortcomings and its departure from the scope of the mandated comprehensive evaluation in the law, the key is to use any trustworthy information from this evaluation to understand what worked and what didn't so that the intervention and its implementation can be improved and the desired impacts obtained.

This is not a cause for mourning and political opportunism, but a cause for deliberation and careful consideration of all the possible explanations: ineffective treatment, poor implementation, diffusion of funds, active treatment in the control condition, and many other factors.

It is also a time to be very careful in drawing conclusions from this study and to be very clear about its limitations when making inferences about the success of the policy and the success of the instructional model it emphasizes. It has been the bane of education to implement policy with very little research foundation and very little effort at rigorous evaluation. Change is hard!

This said, my specific answer to your first question is yes: my initial predictions were realized, and other concerns emerged in my review of the Interim Impact Study Report. I have no doubt that IES and the contractors who carried out the study did the best they could to conduct a fair and objective evaluation, and I am impressed by the hard work that went into it. It is very possible that the final impact evaluation report will address all of the concerns I identify here. If it does, I believe we will have learned a substantial amount of information that can explicitly guide improvements. If it doesn't, we will still have learned a great deal. It is also the case that some of the shortcomings of the evaluation may stem from our not writing the legislation precisely enough for readers to understand the absolute necessity of carrying out a comprehensive impact evaluation. I think we did write Section 1205 clearly within the constraints of conference committee negotiations over the particular language used, but if not, that is another improvement that must be made.

Dr. G. Reid Lyon, an internationally recognized authority on educational issues, announced the founding of SYNERGISTIC EDUCATION SOLUTIONS (SES), a consulting company that advises on the implementation of evidence-based assessment and instruction practices, professional development programs, the development of education policy at local and state levels, and the development of assessment and evaluation programs for colleges and departments of education preparing for regulatory and accreditation activities. Prior to his most recent position as Executive Vice President for Research and Evaluation at Higher Ed Holdings, Dr. Lyon was Chief of the Child Development and Behavior Branch within the National Institute of Child Health and Human Development (NICHD) at the National Institutes of Health (NIH) from 1992 until 2005. In 2006, Dr. Lyon was named one of the ten most influential people in American education during the last decade by the Editorial Projects in Education Research Center (Education Week) for his work in ensuring that scientific research occupies a central role in educational practice and policy. He also currently serves as a distinguished research scholar in the School of Behavioral and Brain Sciences and the Center for Brain Health at the University of Texas at Dallas. The website address for SES is

Published May 5, 2008


