KNOW-HUB

Monitoring and evaluation


Innovation policy assessment has become a critical issue at a time when public budgets are shrinking and their use is under close scrutiny: investments in specific innovation programmes have to demonstrate their economic impact to be justified.

Written by Gabriella Fiori

Reviewed by David Walburn

 

The concept

How to implement it?

Step in the RIS process

What can be expected?

A quote

References

Experts' comments

 

The concept


Evaluation applies to a programme, policy, or strategy:

This is the systematic assessment of a programme to determine (1) how far it is meeting objectives (and, perhaps, achieving other effects), (2) whether it is meeting these objectives efficiently and effectively, and (3) how the management structures (and other factors) have shaped these results.

Evaluation may be ex ante, monitoring, ex post, etc.; and it may focus on a single project or programme, or extend more broadly to encompass the policy framework within which these are located.

Whether it is ex ante, in process (monitoring), or ex post, evaluation always requires clear strategic and operational objectives and targets to be set.

  1. Ex ante evaluation focuses on the context in which the programme will be embedded, the issues addressed, and the operational conditions for implementing the initiative, as well as on the definition of the strategic objectives and targets to be achieved, the resources to be allocated to the programme, the timetable, and the expected impacts, results and outputs (which have to be translated into appropriate indicators). It constitutes a key steering tool and the foundation for further evaluations.
  2. Monitoring focuses on in-process tracking of the programme and on the compliance of the policy outputs and results with the programme objectives and action plan. It allows the programme and its implementation to be adjusted if needed.
  3. Ex post evaluation entails the assessment of the relevance, effectiveness and impact of a programme carried out some time after its completion. It may be undertaken directly after or long after completion. The intention is to identify the factors that have determined the programme success or failure, to assess results and impacts, and to draw conclusions that may inform future programmes and policies. Ex post evaluation can prepare the evolution and re-design of a programme.
A key success factor for the evaluation is that the actors involved share the evaluation process and take part in the definition and production of the indicators.

 

How to implement it?


The main steps in designing an evaluation process can be summarised as follows:

  1. Analysis of the context: gathering quantitative and qualitative data (an interesting tool for detecting companies' support needs, and how well the innovation support instruments match those needs, is a companies' “barometer”, i.e. a quantitative and qualitative survey conducted among targeted companies)
  2. Recall (or definition, in the case of ex ante evaluation) of the strategic objectives and targets
  3. Identification of the (financial and human) resources allocated
  4. Analysis of the outputs, results and impacts achieved, through both quantitative and qualitative approaches (surveys, reports, field interviews, etc.), and comparison with the objectives initially set
  5. Benchmarking against other similar policies or programmes (to obtain a relative appreciation of your performance compared with others, where applicable)
  6. Comparison with mirror groups where applicable (e.g. a group of companies that have been supported in implementing their innovation projects versus companies that have not been aided)
  7. Recommendations and perspectives: the main lessons to be drawn from the evaluation and suggestions for updating and evolving the strategy/programme/project
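The core of step 4 — comparing the results achieved with the targets set ex ante — can be sketched in a few lines of code. This is a minimal illustration only; the indicator names and figures below are invented for the example, not drawn from any real programme:

```python
# Minimal sketch of step 4: comparing achieved results with initial targets.
# Indicator names and figures are hypothetical illustrations, not real data.

def achievement_rates(targets: dict[str, float], achieved: dict[str, float]) -> dict[str, float]:
    """Return the achieved/target ratio for every indicator defined ex ante."""
    return {name: achieved.get(name, 0.0) / target
            for name, target in targets.items() if target}

# Hypothetical ex ante targets and observed results
targets = {"companies_supported": 200, "innovation_projects_launched": 50}
achieved = {"companies_supported": 170, "innovation_projects_launched": 55}

for name, rate in achievement_rates(targets, achieved).items():
    status = "on target" if rate >= 1.0 else "below target"
    print(f"{name}: {rate:.0%} ({status})")
```

In practice each indicator would also carry qualitative evidence (survey results, interview findings), but keeping the quantitative target/achieved pairs in one place makes the comparison with the objectives initially set straightforward and repeatable.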

The most critical issue in the process is the assessment of impact. It is quite difficult to isolate the impact generated by the innovation policy being evaluated from other independent variables, such as the overall economic climate or other variables linked to the specific context in which the policy is applied.

Further, innovation is a non-linear process in which it is not possible to establish clear cause-effect relationships.

To successfully implement an evaluation programme the following conditions are required:

  1. A strong commitment of policy makers
  2. The involvement of the actors participating in the programme implementation
  3. Resources to gather and analyse data (both qualitative and quantitative)
  4. Transparent communication of the evaluation results

 

Step in the RIS process


Step 6 - Integration of monitoring and evaluation mechanisms

 

What can be expected?


Evaluation, when it is embedded in an innovation programme from the outset, makes it possible to keep track of outputs and results and to make better use of public resources.

Further, it should make it possible to assess what would have happened had the policy not been put in place, through a counterfactual or additionality analysis (additionality is the change due to the activity, as compared to what would have happened had the activity not been undertaken at all).
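Combined with the mirror groups mentioned earlier, this kind of additionality estimate reduces to simple arithmetic: the change observed in the supported group minus the change observed in the non-supported group (a difference-in-differences). The sketch below is a hypothetical illustration with invented figures, not a substitute for a full counterfactual study:

```python
# Hedged sketch of an additionality estimate using a mirror group:
# (change in the supported group) minus (change in the non-supported group).
# All figures are invented for illustration.

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def additionality(treated_before, treated_after, control_before, control_after) -> float:
    """Difference-in-differences on mean outcomes of the two groups."""
    return (mean(treated_after) - mean(treated_before)) - \
           (mean(control_after) - mean(control_before))

# e.g. a turnover index for supported companies vs a mirror group
supported_before = [100, 110, 95]
supported_after  = [120, 135, 110]
mirror_before    = [100, 105, 98]
mirror_after     = [105, 112, 101]

estimate = additionality(supported_before, supported_after, mirror_before, mirror_after)
print(f"estimated additionality: {estimate:.1f}")
```

The caveat from the main text applies in full: because innovation is non-linear and the groups are rarely perfectly comparable, such a figure is an indication, not proof of a cause-effect relationship.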

Finally, it can be a powerful instrument for changing the culture of innovation policy design, and of public policy management more generally, towards a fact-driven approach.

 

A quote


  • γνῶθι σεαυτόν, "Know thyself" - Socrates (Plato, “Dialogues”)
  • “Last but not least, placing a priority on the development of a robust and evolving Regional Innovation Observatory is a must to support the strategy and a key to ensure commitment from politicians to the strategy.” - Claire Nauwelaers in her recommendations to the PACA region for the 3S

 

References


  • Smart innovation: a practical guide to evaluate innovation programmes, Louis Legrand et associés for DG Enterprise and Industry
  • The practice of evaluation in innovation policy in Europe, Jakob Edler, Martin Berger, Michael Dinges and Abdullah Gok
  • ARISE: Accelerating Regional Innovation Strategies Exchanges, IV PCRD
  • SCINNOPOLI: Scanning Innovation Policy Impact, INTERREG IVC capitalisation project

 

Mrs Gabriella Fiori


 

Gabriella Fiori has more than 15 years' experience in the field of innovation policy design, implementation and assessment.

After a first working experience in Italy in an organisation bridging the research and business worlds, she moved to France, where she joined Méditerranée Technologies, the Provence-Alpes-Côte d'Azur regional innovation agency.

She is in charge of the Regional Innovation Observatory set up in 2010, recently acknowledged as national good practice in the framework of the activities led by the French Government to support the 3S strategy elaboration.

Gabriella has also coordinated and participated as partner in several European projects focussing on Innovation policy design, assessment, benchmarking and technology transfer.

She is currently involved, together with the Regional Council and the representatives of the central French government, in the design of the RIS 3 for Provence-Alpes-Côte d’Azur.

fiori@mediterranee-technologies.com

 

Experts' comments


I have already made brief comments on the topic of evaluation[1], but there is an opportunity here to go into the topic more fully. This is a really important issue for economic developers because, as the main paper states, when public funding is scarce, the need to show “proof of impact” to justify spending can be critical.

Unfortunately for economic developers, coming up with what might be safely called “proof” can often be very difficult. As the author of the paper points out, “innovation is a non-linear process in which it is not possible to establish clear cause-effect relationships”. The same could of course be said of a whole range of economic development tools. What this means is that even when what might be considered rigorous evaluation indicates that a particular programme is good value for money, the results can be rubbished or ignored by people or organisations with an axe to grind against the intermediary organisation, or who are pursuing a different policy agenda.

In England, regional development agencies were abolished in 2010 despite a major analysis carried out by PricewaterhouseCoopers during the previous year, which concluded that for every £1 of public expenditure made through the agencies there was a £4 boost to regional economies. The abolition was effected with very little adverse political comment, even from stakeholders in the national and regional economies. Why was this? The reasons suggested here are speculation on my part, but they are worthy of consideration:

  1. For those with an ideological objection to the idea of government intervening effectively in local economies, and who wanted to achieve cuts in public spending, the results of the PWC study were simply ignored or not believed. Even though PWC is an accountancy firm with a high reputation, the fact that the research had been commissioned by the Labour government which lost power in 2010 tainted the results.
  2. Regional Development Agencies never succeeded in establishing a high public profile in their regions or on the national stage. Most people had little awareness of their existence or what they were doing.
  3. As gatekeepers for a great deal of public funding, RDAs often had strained relationships with other regional stakeholders which felt they had as much legitimacy to be active in economic development as the RDAs. This was particularly true of local authorities which felt they had a democratic mandate, whereas RDAs were run by central government and had little, if any, local accountability. The demise of RDAs was not so unwelcome therefore in some quarters.
  4. Just as RDAs failed to connect with the public, they also failed to establish strong supportive links with the sectors and stakeholders they were supposed to be supporting. Although they were described as “business-led”, this only meant that RDAs had board members appointed from the commercial sector by government. Their priorities and funding were set by central government, and did not necessarily connect with the needs of their stakeholders in the regions.

The point to be made here is that rigorous evaluation, important though it is, is never enough and it may not be the most important indicator of a regional organisation’s success.  Cause and effect may be difficult to demonstrate through statistical analysis, but it can show itself in other ways which can be vital for an intermediary organisation.

In general terms, intermediary organisations exist to do things which will improve the performance of the economy in their regions and the life chances of the people living there. If they are doing their job well, people should both be aware of this and be supportive of the regional organisation's role. Even if this positive effect is difficult to pick up in the statistics, the support of regional citizens and stakeholders is an important marker. If development services were threatened and a cry of protest went up, then regional organisations could be judged to be doing something right. It is important that regional organisations do not leave this building of support to chance. The impact of their programmes, shown in rigorous evaluation and in human responses, needs to be widely communicated.

It is also important for intermediary organisations to be aware of the potential for bias, and perceived bias, in the way that evaluation of their work is carried out, the “ex post evaluation”. When a programme has been launched, there will often have been much political and managerial capital invested in it, quite apart from the public money involved. An evaluation which demonstrated that a programme was misconceived and had wasted resources could be damaging to an agency's reputation, and to the politicians behind it. Add to this mix the fact that a number of consultancy firms have built lucrative businesses working for intermediary organisations doing external evaluations, and that there may be a reluctance to compromise a good client by undermining its credibility. Then further add what we already know about the difficulty of demonstrating true cause and effect in the operation of many economic development programmes, and one can see how the process of evaluation can be compromised. Even if none of this dynamic is actually in play, a perception that it might be could undermine the credibility of any findings. Regional organisations always need to show caution in their claims about the efficacy of their work, especially if such claims are based chiefly on quantitative data without the clear support of stakeholders and participant organisations.

A few years ago, the idea of “evidence-based policies” was very popular in many spheres of government, though in the end the politics frequently outdid the evidence when it came to the crunch. A similar dynamic is at work for intermediary organisations. Monitoring and evaluation of programmes is vital for their credibility, but if they forget that they operate in a political context they will be in trouble.

 

Mr David Walburn


After a career in business David Walburn joined Greater London Enterprise in 1986 where he was responsible for venture capital and other small business support, before becoming Chief Executive of the organisation. He was the Chair of the London Business Angels Network and played a key role in the setting up of the European Business Angels Network. He has worked with the UK government and the European Commission on developing public policy initiatives to improve the financing of small and medium-sized enterprises. He was the Chair of Capital Enterprise, the umbrella body for organisations supporting micro business development in London, until 2012.

For the last ten years he has been a Visiting Professor at London South Bank University where he headed the Local Economy Policy Unit and was the managing editor of the journal Local Economy.

He has served as President of EURADA, and been a member of a number of advisory bodies of the European Commission.  He has been an active member of the International Economic Development Council in Washington DC and has a wide range of international contacts with economic development organisations.

He continues to write and lecture on small business finance and regional economic development.

davidwalburn@london.com

 
