Scientific research evaluation has become increasingly important as science and society develop in tandem. In China, however, the practice is still in its infancy and has a long way to go.

  

  On July 25th, Erik Arnold, an expert from Technopolis, a UK-based technology and innovation consulting and evaluation agency, gave a lecture to Chinese scientific research evaluators.

  A timely collaboration

  An important purpose of scientific research evaluation is to independently examine and measure the progress of a research and development project or program, identify shortcomings in the work, and draw lessons and knowledge from it, thereby providing feedback for future project implementation and for formulating new policies and plans. Such evaluation matters to every country.

  Against this background, 15 years ago China's Ministry of Science and Technology took the lead among the departments of the State Council in introducing an evaluation mechanism into departmental management, and the first practitioner of science and technology evaluation in China, the National Center for Science and Technology Evaluation (NCSTE) under the Ministry of Science and Technology, came into being. It remains the only government agency in China that specializes in science and technology evaluation. Zhang Xiaoyuan, director of the center, said: "Although NCSTE's work has grown greatly over the past 15 years, we know very well that science and technology evaluation in China is still in its infancy, so we have a strong desire to communicate and cooperate with our foreign counterparts."

  After coordination among many parties, NCSTE and the relevant British institutions formally launched the "Sino-British Science and Technology Evaluation Cooperation Research Project". Its purpose is to further promote and improve the construction of China's science and technology evaluation system and the evaluation of its science and technology programs, on the basis of a full understanding of each side's current evaluation practice, and thereby to foster future scientific and technological cooperation and exchange.

  In June this year, NCSTE sent a five-member expert group to visit nine organizations, including a Select Committee of the British Parliament, the Department for Innovation, Universities and Skills, the UK Research Councils, the Royal Society and Technopolis, to learn about the management mechanisms and organizational models of British science and technology evaluation as well as the latest theories and methods of R&D evaluation.

  On July 21st, the British side sent Erik Arnold, a senior science and technology evaluation expert, to China for a week-long return visit to the Ministry of Science and Technology, the Ministry of Education, the Chinese Academy of Sciences, the National Natural Science Foundation of China and other institutions. On the 25th, he presented and discussed cases and experience in scientific research evaluation at NCSTE.

  Yang Yun, an associate researcher in the center's International Cooperation Department, said: "We gained a great deal." That was also the remark this reporter heard most often that day.

  Use papers as indicators with caution

  Arnold's lecture that day focused on two evaluations conducted by Technopolis: the evaluation of the KTS program in Sweden, and the institutional evaluation of the Research Council of Norway (RCN). The former focuses on a specific program or project, while the latter looks more broadly at a country's innovation system. Many of the issues raised resonated with China's situation.

  For example, in contrast to China's practice of using supposedly objective data such as patent and paper counts to demonstrate a program's effectiveness, the KTS evaluation found such indicators useless. Arnold said: "This comes down to the selection of indicators, which should truly reflect the situation. If you want to examine the transfer of knowledge to industry, that has nothing to do with the number of published papers; choosing papers as the indicator then explains nothing. But if you want to show the development of human resources, the number of doctoral theses produced can tell you something."

  Arnold said: "There has been a constant debate about how to choose indicators. Not only in China, but also in scientific research institutions and researchers all over the world will complain that too much attention is paid to the number of papers in the evaluation, but in fact, it is a challenging task for evaluators to find appropriate indicators. When examining the performance of a research institution or individual, whether the number of papers published can explain the problem depends on what the problem is and whether the paper can reflect what you want to explain. "

  Arnold said that paper counts are widely used in universities for title appraisals and professorial promotions, which leaves many people unhappy, and some complain. He said: "Many German university professors do research for three to five years after receiving their Ph.D. and then work in industry for 20 years. If a university relies only on paper counts to decide whether to hire them, this group will be excluded, because after many years in industry they have had no time to publish enough papers. That is actually bad for the university's development: these people bring real industry experience and let universities know what kind of content industry needs. So I hope universities will not consider only the number of papers when setting hiring criteria, and will try to avoid the problem I described."

  Signals for innovation from the evaluation community

  Besides being an expert in scientific research evaluation, Arnold is also an expert in science and technology innovation policy. He believes that because programs are formulated by organizations, evaluating an organization can change and guide its whole way of running projects, and may therefore have a greater impact. In his own words, "Science and technology evaluation is not just about demonstrating evaluation skills and techniques. We hope to learn more about a country's science and technology innovation system and to generate knowledge that guides the government in using that system to promote innovation and to manage innovation better."

  According to Arnold, the science and technology innovation programs that impressed him most were the industry-university-research collaboration programs in some Nordic countries, whose success shared an important feature: all stakeholders were involved from the start of planning and design. He said that if only researchers do the planning, they tend to focus on basic research without considering how to connect with industry; conversely, programs designed unilaterally by industry tend to emphasize short-term output. A successful R&D program combines the strengths of both: industry tells researchers what topics interest it, and researchers tell industry what interesting ways there are to solve the problems it faces.

  In the institutional evaluation of the Research Council of Norway, Arnold and his colleagues found that although the council's establishment brought together previously fragmented research and innovation functions, the 16 ministries behind it had not changed their ways of working: they still told research institutions in great detail what research to do. This fragmented pattern of funding and directing research hindered the free development of research and innovation. They therefore suggested that, within such a complicated system, the ministries should stop clinging to power and dictating to research institutions in detail, and instead let the institutions solve problems in their own way.

  Arnold said that knowledge creation has quietly changed. For a national innovation system to play its role, each part of the system must not only operate well but also be well connected to the others. He pointed out that knowledge is no longer formed and created as it was in the past, when a specific problem was solved within a conservative, academically driven organizational form, with disciplines independent of one another and very strict control inside each discipline. Today the creation of knowledge has gradually developed into a quite different model: knowledge formation is application-driven, breaks the boundaries of single disciplines, and takes dynamic forms of organization and management.

  Correspondingly, scientific research evaluation also needs new approaches. Arnold said: "Although people should respect peer review as the engine of scientific progress in specialist fields, judging the quality of R&D only by the technical content of the innovation itself can no longer fully meet society's needs. In recent decades, evaluation methods based on social impact have developed rapidly. Combining the two approaches in evaluation practice makes more comprehensive results possible."

  Science Times (2008-8-7 International)

