Edu-Thinker Influence and Expertise Rankings 2024


Policy Report #24-A, April 2024


Dr. Christopher Lubienski, Dr. Joel Malin, Paul Faulkner

Key Takeaways
  • While think tanks and similar organizations often have considerable influence, their expertise, like the qualifications of many individuals working for them, is sometimes lacking.
  • There is not a clear correlation between influence and expertise, and several prominent organizations have little discernible expertise in areas in which they promote policy ideas.
  • Policymakers and the media need to be aware of the ideological agendas of knowledge brokers claiming to represent the research.
  • Policymakers and the media need to do a better job of vetting the expertise of policy influencers.

Think tanks exert significant and growing influence over a range of public policy issues, setting priorities and providing research to advance agendas through the policymaking process (Lerner, 2018), including through the news media (Haas, 2007) and the philanthropic world. But of course few if any think tanks are objective arbiters of evidence, and it is increasingly difficult in many sectors — often by design — to distinguish those organizations that primarily produce evidence from those that simply advocate for an agenda. Indeed, many organizations working in the policy arena are tied to specific agendas, typically through their funding sources. Yet their growing influence in sectors such as education raises the question as to whether their prominent position in policy discussions is due to their ability to produce and analyze evidence or to their effectiveness in engaging media, policymakers and philanthropists (Malin & Lubienski, 2015). Said more simply, as private entities exert more influence over public policy in education, is their influence based on research expertise or media savvy?

“Think tanks’ growing influence in sectors such as education raises the question as to whether their prominent position in policy discussions is due to their ability to produce and analyze evidence or to their effectiveness in engaging media, policymakers and philanthropists.”

Of course, the answers to these questions will vary by organization, as well as by the types of individuals within each of those organizations (whether think tanks or adjacent entities). Here, we sought to understand the level of public influence exercised by think tanks (and think tank-type entities) and individuals at those and similar organizations involved in education. This part of the analysis draws upon similar rankings — most prominently, Rick Hess' Edu-Scholar Rankings1 of influential university-based (as opposed to think-tank based) scholars. In some regards, Hess' ranking effort holds university professors to metrics that are more important for think tank researchers by examining, for instance, their media citations or mentions in the Congressional Record, rather than factors more important to researchers at universities — issues such as doctoral training, grant awards, or publishing in prestigious journals (Alperin et al., 2019).

However, our new project adds a critical dimension by not only turning the tables and measuring the influence of think tanks, but also contrasting that with measures of their expertise on education issues. Doing this allows readers to compare the influence of these think tanks and their education policy specialists with their expertise on education issues. This contrast offers some useful insights into the extent to which think-tankers’ assertions regarding evidence are matched by their capacity to understand, analyze and interpret the evidence. As policymakers call for “evidence-based” or “research-based” policies, it is crucial to consider whether those influencing policy know what they are talking about.


So this project shifts the gaze onto non-university-based experts at think tanks and advocacy organizations, applying measures of scholarly expertise while at the same time comparing these to measures of public influence. University-based researchers are expected to be experts in producing research but are not necessarily expected to exert influence in public debates; here we pose the inverse question: are the people at think tanks who are expected to have influence actually experts? While prior projects of a similar nature have included only a limited number of think tanks (Greene, 2022), our current project is uniquely designed to focus on a broad range of primary players in education policy within the think tank and advocacy organization arena.

Click here to view the full results and methods.

In total, this project examines 30 different think tanks or advocacy organizations and 162 specialists at those organizations. Each individual is assessed via four metrics related to public influence and four metrics reflecting scholarly expertise. Specific metrics are weighted to garner a total score for each individual in both public influence and scholarly expertise. Organizational scores are then calculated through a sum of the individual scores at that organization. All data was collected between January 29 and February 1, 2024.

Organizations were included if they were US-based think tanks with a significant emphasis on US education broadly, that were not university-affiliated, and that had eligible individuals on their team; in addition, 8 education advocacy organizations were included that generally met the stated criteria. State-level organizations, organizations focused on a singular, narrow issue within education, and consulting groups, watchdogs, or strict research institutes were excluded, as were organizations that did not appear to engage in policy or research work.

Within the organizations in our sample, individuals were included if they held permanent, influential roles directly affiliated with the organization's domestic education policy research work at the federal level (such as fellows, vice presidents, directors, or assistant directors); the top executive was also included for organizations that specifically focus on education. Individuals holding a regular, full-time academic appointment or emeritus status at an academic institution were excluded, as were those with less influential or auxiliary roles within the organization (such as analysts, associates, managers, or consultants). Individuals who were affiliated with more than one of the included organizations in eligible roles were assigned to only one primary organization in order to avoid duplicate records.

The Public Influence total is the sum of the Social Following (divided by 1000), Book Popularity (divided by 10), News Attention (divided by 10), and Congressional Mentions scores. Scores are rounded to the nearest whole number.

  • The Social Following metric is the total number of Twitter/X followers.
  • The Book Popularity metric is the total number of Amazon ratings on authored nonfiction results related to education in the Books category, excluding edited volumes.
  • The News Attention metric is the total number of results mentioning the individual related to education within the Access World News database in which their affiliated organization also appears.
  • The Congressional Mentions metric is the total number of times the individual appeared in the Congressional Record for their professional work related to education since 1995.
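The weighting described above can be sketched as a simple calculation. This is a minimal illustration of the stated formula; the function name and the sample figures are hypothetical, not drawn from the report's dataset.

```python
def public_influence(social_following, book_popularity, news_attention,
                     congressional_mentions):
    """Public Influence total: Social Following / 1000 + Book Popularity / 10
    + News Attention / 10 + Congressional Mentions, rounded to the nearest
    whole number, per the methodology described above."""
    total = (social_following / 1000
             + book_popularity / 10
             + news_attention / 10
             + congressional_mentions)
    return round(total)

# Hypothetical individual: 50,000 followers, 120 Amazon ratings,
# 340 news results, 6 Congressional Record appearances.
print(public_influence(50_000, 120, 340, 6))  # 50 + 12 + 34 + 6 = 102
```

Under this weighting, 1,000 social media followers, 10 Amazon ratings, 10 news results, and 1 Congressional Record appearance each contribute one point to the total.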

The Scholarly Expertise total is the sum of the Citation Impact, Book Publishing (multiplied by 2), Journal Publishing (multiplied by 2), and Degree Attainment scores.

  • The Citation Impact metric is the h-index as calculated by Scopus.
  • The Book Publishing metric is the total number of authored nonfiction results related to education in the Books category that were published by a university-affiliated press, excluding edited volumes.
  • The Journal Publishing metric is the total number of articles published by the individual in 10 identified top journals relevant to the field of education policy studies, as returned by a Scopus search.

The Degree Attainment metric assigns points to individuals for the highest degree they have attained in any field (either 1 point for a master’s, 3 points for a specialist, or 5 points for a doctorate). Individuals received additional points (either another 1 for master’s, 3 for specialist, or 5 for doctorate) for their highest attained graduate degree in education or an identified field related to education policy, with doctoral credit given for dissertations dealing significantly with education.
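The expertise weighting can likewise be sketched in code. This is an illustrative rendering of the formula stated above; the function names and the example figures are assumptions, not the report's data.

```python
# Points for the highest degree attained, per the Degree Attainment metric.
DEGREE_POINTS = {"masters": 1, "specialist": 3, "doctorate": 5}

def degree_attainment(highest_any_field, highest_in_education=None):
    """Points for the highest degree in any field, plus additional points
    for the highest graduate degree in education or a related field."""
    points = DEGREE_POINTS.get(highest_any_field, 0)
    if highest_in_education:
        points += DEGREE_POINTS.get(highest_in_education, 0)
    return points

def scholarly_expertise(h_index, books, journal_articles, degree_points):
    """Scholarly Expertise total: Citation Impact + 2 x Book Publishing
    + 2 x Journal Publishing + Degree Attainment."""
    return h_index + 2 * books + 2 * journal_articles + degree_points

# Hypothetical individual: h-index of 14, 2 university-press books,
# 5 top-journal articles, and a doctorate in education.
degrees = degree_attainment("doctorate", "doctorate")
print(scholarly_expertise(14, 2, 5, degrees))  # 14 + 4 + 10 + 10 = 38
```

Note that the double-counting of an education doctorate is intentional: a person with a doctorate in an education-related field earns both the 5 points for highest degree in any field and another 5 for highest graduate degree in education.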


At the organizational level, the Manhattan Institute leads in public influence while the Learning Policy Institute garners the highest level of collective scholarly expertise. Only one organization landed in the top 5 for public influence as well as scholarly expertise: the American Enterprise Institute.

On the individual list, Christopher F. Rufo of the Manhattan Institute leads in public influence while Na'ilah Suad Nasir from the Learning Policy Institute holds the highest level of scholarly expertise. Two individuals landed in the top 10 for public influence as well as scholarly expertise: Diane Ravitch and Frederick M. Hess.

In a perfect world of research-based policy, where an organization’s expertise correlates with its influence in public policy matters, we would expect a tight relationship between research acumen and influence — the greater an organization’s (or individual’s) expertise on an issue, the greater its impact. But in the real world of education policy, clouded by private funding, advocacy and agendas, this is too often not the case: there is frequently a notable disconnect between expertise and influence. While some organizations (and individuals) exhibit comparable levels of expertise and influence, this project also illuminates many cases where influence is clearly not matched by research expertise. For instance, the Manhattan Institute and the Center for Education Reform score relatively low on expertise but exert considerable influence on education issues. At the other end, the Urban Institute has substantial expert capacity but less impact than that capacity might predict.


Because they are easy to understand, rankings are popular but limited devices for understanding complex issues where different factors may be at play. This endeavor is thus also limited and by no means intended to create the definitive picture of education policy thinkers and the think tanks with which they work. Still, the fact that this project offers two separate rankings provides some insight into the often disjointed nature of policy formation in the US — with some influential think tanks’ relative lack of expertise suggesting a divorce between evidence and policy. Results such as these perhaps do more to raise questions than supply answers. One may wonder, for instance, what specific features, in the absence of expertise, might help to explain the outsized influence of certain organizations. Nevertheless, it is hoped that these rankings open up a conversation around the role of think tanks and advocacy organizations in domestic education policy work.

1This language was changed at the request of Education Week to reflect the fact that while Hess’ rankings are hosted on their site, EdWeek does not play any other role in his rankings.