Evidence-based assessment, equity and opportunity

13 September 2023

By Margaret Sheil, AO, FAA, FTSE

“The reason metrics and other indicators have such widespread currency is not just because they provide the illusion of accuracy; it is because they are so convenient.” (Prof Margaret Sheil, AO, Vice-Chancellor and President of QUT, Australia)

When it comes to assessing academic career progress, metrics can be deceptive, especially when evaluating individuals from diverse backgrounds.

My career has traversed scientific research, university teaching, academic leadership, and research policy at three very different universities and in government as chief executive of the Australian Research Council (ARC) — the major national agency for funding non-medical research in Australian universities. I recently led a review of the ARC for the current government, more than 10 years after I left that role. When reflecting on my career, I am regularly asked how I have simultaneously presided over meritocratic institutions or processes whilst strongly supporting initiatives designed to increase access, success and representation of underrepresented groups.

This article is taken from the Not Alone newsletter, a monthly publication featuring fresh perspectives from research and academic leaders on global issues.

Merit and opportunity

Early in my career, the focus was women in science; later initiatives supported First Nations students and staff. Each of my institutions, and indeed Australian government policy, has sought to support students who have not historically had access to higher education. The tenor of these inevitable questions presumes that merit can be objectively assessed, that equality is the same as equity, and that promoting excellence is in tension with access. The latter assumption underpins the recent US Supreme Court decision to overturn the legal basis for affirmative action programs in employment and admissions — running counter to President Johnson’s declaration when he introduced affirmative action in the 1960s: “We must seek not just equality as a right and a theory but equality as a fact and result … equal opportunity is essential but not enough.”

That principle has not changed: talent is broadly distributed; opportunity is not. So whether we are selecting for admission to our universities or recruiting staff, we must not start with the assumption that each has the same opportunity to develop or demonstrate their ability.

Women in research

My interest in equity started early in my career. Even by 1990, women in tenure-track roles in university science departments in Australia were rare. As I was striving to compete for research funding and publications in the top journals and establish myself as an independent scientist, I initially resented the frequent calls on my time to sit on recruitment and selection panels to provide gender input (not gender balance, as we did not have the numbers). Often the only woman in the room, I became acutely conscious of, and ultimately more outspoken about, the barriers, inequities and unconscious biases when it came to assessing the “merit” of various individuals.

When I later joined the university-wide tenure/promotion committee, I naively thought that if we presented more data and more publication metrics on candidates, these biases would not persist. However, they usually did, just in different, seemingly more objective forms. I found that efforts to improve metrics often had the opposite effect. For example, substituting total numbers of publications and citations with a measure such as the h-index, which considers cumulative performance over time, has exacerbated inequities for those who have had non-linear or interrupted careers. These are mostly women with caring responsibilities, but also those who have taken time out to commercialise their research or spend time in an industry partnership.
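
To make the h-index point concrete, here is a minimal sketch in Python, using entirely hypothetical citation counts: because the metric rewards uninterrupted accumulation, a researcher whose output pauses for a career break falls behind even when the per-paper impact of their work is comparable.

```python
# Minimal sketch with hypothetical data: the h-index is the largest h such that
# an author has at least h papers with at least h citations each.
def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two hypothetical researchers with similar per-paper impact: one publishes
# continuously, the other takes a career break, so fewer papers have had
# time to accumulate citations -- and the cumulative metric lags.
continuous = [25, 20, 18, 15, 12, 10, 9, 7, 5, 3]
interrupted = [25, 20, 18, 15, 3, 2]

print(h_index(continuous))   # 7
print(h_index(interrupted))  # 4
```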

My general observation is that the more precise or informative a new indicator appears, especially when it is applied as a solution, the less scrutiny is given to the quality of, and opportunities available to, the individuals it is applied to.

Because of these challenges, I frequently heard as a young scientist that “if you take a break, you can never return.” Fortunately, we now have numerous amazing role models to counter that claim. In my own discipline of mass spectrometry, Prof Dame Carol Robinson, FRS is an example of someone who has had a non-traditional yet outstanding career journey and success.

Biases in academic recruitment and promotion adversely impact women and minority groups in academia, while the lack of diversity further hampers academic achievements and ultimately diminishes the potential societal impact. Apart from being the right thing to do for affected individuals, there is also ample evidence demonstrating that diverse teams perform better and produce better outcomes. Diverse teams in research institutions improve research methodology and question development. For example, there is an abundance of research and policy decisions based on data or evidence generated from men, with findings then being extrapolated to women without considering differences between the sexes.

At the Australian Research Council, when confronted with the overall lack of success for women, we made changes to encourage reporting and equitable treatment of those who had career breaks and to encourage more women to apply. These resulted in some progress, but our recent review found something paradoxical: Over the ensuing 12 years, in an effort to increase fairness and objectivity, much more detail was added to the career descriptions to the point that reviewers paid less and less attention, thereby potentially reducing the impact of this important change over time.

Alongside the changes in presentation of research records, we also analysed in more detail the number of grants awarded to men and women applicants at different career stages. This yielded some unexpected results, including that the difference in overall success rates (the number of applicants awarded grants compared to those who applied) was greatest at the earliest career stages (0-5 years post-PhD). Therefore, we concluded these differences could not just be due to “women taking career breaks or not being promoted at the same rate” but rather that men were given more sponsorship and opportunities for first authorship, presentations at conferences and more challenging or different projects (e.g., quantitative versus qualitative), which then manifested as seemingly better publication metrics from the outset of their careers. In contrast, the scores for project quality were not correlated with the gender of the applicant. Our conclusion was similar to that expressed in the San Francisco Declaration on Research Assessment (DORA), which notes that funding agencies should “be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.”
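
The calculation behind this kind of analysis is simple: grants awarded divided by applications submitted, broken down by career stage and gender. The sketch below is illustrative only; the counts are invented, not ARC data, and are chosen so that the gap is widest at the earliest career stage, mirroring in caricature the pattern the review describes.

```python
# Illustrative only: the counts below are hypothetical, not ARC data.
# Success rate = grants awarded / applications submitted,
# broken down by career stage and gender.
applications = {
    # (career_stage, gender): (applied, awarded)
    ("0-5 yrs post-PhD", "women"): (400, 52),
    ("0-5 yrs post-PhD", "men"): (600, 108),
    ("5-10 yrs post-PhD", "women"): (350, 63),
    ("5-10 yrs post-PhD", "men"): (500, 95),
}

for (stage, gender), (applied, awarded) in applications.items():
    print(f"{stage:>18} | {gender:<5} | {awarded / applied:.1%}")
```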

Metrics and measurement

During my time as CEO, the ARC also developed and implemented a research evaluation system for Australian universities, known as the Excellence in Research for Australia (ERA) initiative. ERA aimed to use metrics and other evidence to inform expert assessments of the standing of individual disciplines within each institution. ERA was underpinned by detailed consultation and testing of the types of indicators that were possible proxies for quality. These ranged from traditional publication metrics in the natural and physical sciences to peer review in the humanities and social sciences, evidence of applications in professional fields (e.g., conferences or reports), and non-traditional outputs in the creative and performing arts. A challenge for a national research measurement exercise is that it is impossible not to influence what you are trying to measure, as Heisenberg demonstrated almost a century ago. When research metrics are applied to the consideration of individual academic performance, academics will respond to the measurement system and may change their behaviour. A detrimental effect was a focus on publications in international journals and arguably less focus on research relevant to the Australian or local context, interdisciplinary research and research with longer time horizons. On the positive side, the overall quality of Australian research improved. However, the benefits relative to the burden of the exercise have also diminished over time. Our recent review recommended to government that the effort would be better directed elsewhere. We further recommended against any attempts to replace the current evaluation exercise with metrics alone, however tempting and convenient that option may be.

Indicators here to stay?

The reason metrics and other indicators have such widespread currency, however, is not just because they provide the illusion of accuracy; it is because they are so convenient: they save time and reduce the cost of assessment. Time, as we know in the increasingly busy world of academia, is always at a premium. Undertaking detailed peer review of publications and grants or serving on promotion and selection panels is typically work done over and above other obligations in teaching, research and university or professional service. Tools that enable those roles to be done more efficiently, and hopefully more expertly, are attractive and inevitable. Similarly, tools that compare different parts of the university are seemingly attractive to busy university leaders. I use them in that way too, but I am very careful about not using them indiscriminately when applied to individuals. The irony of leaders who complain about the vagaries of the indicators used by international ranking agencies yet at the same time utilize similar methods within their institutions in funding allocations is not lost on our staff.

Looking beyond the traditional indicators

My own institution has recently committed to DORA and commenced work on the implementation of principles that align with best practice in research assessment. We recognise the inevitability and increasing attractiveness of more sophisticated search and artificial intelligence tools, each of which may introduce discrimination and biases affecting different groups and individuals, especially those with more complex experiences and careers.

Given a long-standing interest in identifying and rewarding those who will be good mentors, I asked the International Center for the Study of Research (ICSR) team at Elsevier whether this could be gleaned from publishing behaviour. For example, a good mentor may have a lot of junior co-authors and, depending on the field of research, choose to put their junior co-authors in senior positions. An individual who may be good at promoting diversity will have more diverse co-authors, and so on.
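
As a rough sketch of how such signals might be derived from co-authorship records, consider the following. The record structure, the five-year definition of “junior” and the treatment of first authorship as the prominent position are all assumptions made for illustration; they are not ICSR’s actual method or data model.

```python
# Hedged sketch: possible mentoring signals from co-authorship records.
# The data model, the "junior" cutoff and the authorship-position convention
# are illustrative assumptions, not the ICSR dashboard's definitions.
from dataclasses import dataclass

@dataclass
class Paper:
    authors: list[str]              # author names in byline order
    years_post_phd: dict[str, int]  # career stage per author (assumed known)

def mentoring_signals(papers: list[Paper], mentor: str, junior_cutoff: int = 5) -> dict:
    """Count junior co-authors and papers where a junior colleague leads."""
    junior_coauthors: set[str] = set()
    junior_first_author_papers = 0
    for p in papers:
        if mentor not in p.authors:
            continue
        for a in p.authors:
            if a != mentor and p.years_post_phd.get(a, 99) <= junior_cutoff:
                junior_coauthors.add(a)
        # credit the mentor when a junior colleague takes the lead position
        first = p.authors[0]
        if first != mentor and p.years_post_phd.get(first, 99) <= junior_cutoff:
            junior_first_author_papers += 1
    return {
        "distinct_junior_coauthors": len(junior_coauthors),
        "papers_with_junior_first_author": junior_first_author_papers,
    }
```

Similar counts over a co-author list could stand in for the diversity signal mentioned above, though any such proxy carries its own biases and would need the same scrutiny as the metrics it supplements.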

Preliminary work undertaken with ICSR was aimed at developing a dashboard that provides a more holistic picture of individuals. A focus on highlighting attributes that are not normally recognised — like mentoring, social engagement and collaboration — may assist in encouraging diversity and achieving equity in all our collective endeavours. Still, there is more work to be done with many challenges ahead, not the least of them the rise of generative AI and its impact.

If indicators are here to stay, we need to increase awareness and education of those who use them, identify better ways to account for the complexity of diverse career trajectories and the quality and impact measures relevant to different disciplines and cohorts of staff — and be mindful of the biases inherent in any data set. The Hippocratic Oath as it applies to physicians should also apply to the use of metrics: they must first do no harm.

Prof Margaret Sheil, AO, FAA, FTSE

Prof Margaret Sheil commenced as Vice-Chancellor and President of Queensland University of Technology in 2018, having previously been Provost at The University of Melbourne (2012-2017) and Chief Executive Officer of the Australian Research Council (2007-2012). Professor Sheil has been an academic in chemistry and held a number of senior roles at the University of Wollongong, including as Dean of Science and Deputy Vice-Chancellor (Research). She is a Fellow of the Australian Academy of Science (AAS), the Australian Academy of Technology and Engineering (ATSE), the Royal Australian Chemical Institute (RACI) and the Australian and New Zealand Society for Mass Spectrometry (ANZSMS). Professor Sheil is Chair of the Board of the Queensland Museum Network and Deputy Chair and lead Vice-Chancellor for Research of Universities Australia. In 2023 Professor Sheil chaired the “Trusting Australia’s Ability: Review of the Australian Research Council Act 2001.”

Contributor

Prof Margaret Sheil, AO, FAA, FTSE

Vice-Chancellor and President

Queensland University of Technology