Developing good performance information (RMG 131)

Important notice: The information previously contained in 'RMG-131A: Developing Performance Measures' has been merged into this updated 'RMG-131: Developing good performance information'. RMG-131A is no longer available on the Finance website; its contents can now be found only on this page.

Audience

This guide is relevant to accountable authorities, Chief Financial Officers, Chief Operating Officers, program managers and officers responsible for measuring and reporting on the performance of activities delivered by a Commonwealth entity.

Commonwealth companies may use aspects of this guide to assist them in meeting their obligation to produce annual corporate plans under section 95 of the PGPA Act and sections 16E and 27A of the Public Governance, Performance and Accountability Rule 2014 (PGPA Rule).

Purpose

This guide provides practical information to support officials of Commonwealth entities in developing good performance information. It also provides guidance on the requirements, as prescribed by section 16EA of the PGPA Rule, for performance information developed by entities, and replaces the Quick Reference Guide – RMG 131 Developing good performance information.

This guide does not:

  • provide definitive technical advice on how to design performance measures; nor
  • prescribe a generic set of standard performance measures to be reported by Commonwealth entities.


1. Introduction

  1. All Commonwealth entities and companies have purposes. These are the strategic objectives that the entity intends to pursue, or make a significant contribution to achieving, over a reporting period. The purposes of an entity include the objectives, functions or role of the entity.
  2. Performance measurement involves collecting, analysing and reporting information about the performance of an entity or company against its purposes.
  3. Having effective performance reporting and monitoring arrangements is a key aspect of good governance.
  4. Effective performance measurement enables entities to:
    • measure and assess their progress toward achieving their purposes;
    • identify what policies and activities work and why they work in order to inform policy development;
    • drive desired changes in the efficiency and effectiveness of services;
    • demonstrate whether the use of public resources is making a difference and delivering on government objectives;
    • make decisions about how best to deploy their resources across competing priorities; and
    • demonstrate and promote their achievements and explain any variance from expectations or reference points.

Effective performance measurement is also important for accountability, as entities are accountable for their activities to the Parliament and, ultimately, to the public they serve.

  5. Accountable authorities are required to measure and assess the performance of the entity in achieving its purposes. One of the objects of the Act is to require Commonwealth entities to provide meaningful information to the Parliament and the public to assist them in understanding how entities are performing, and how they are using the resources that have been entrusted to them.
  6. Entities must also keep records that properly record and explain the entity’s non-financial performance. The ability of an entity to measure and assess its performance depends on accurate and complete records of data. This should include sufficient documentation of the rationale for the overall design of the performance framework, the types of performance measures used, data sources, collection methods and procedures, and clear management trails of data calculations.
  7. There is no one-size-fits-all approach – entities need to develop a performance framework that is appropriate to their purposes and activities. This guide focuses on the value and features of good performance information, and on developing performance measures.
  8. Generally, the cycle of developing performance information will involve several stages:
    • Planning, including deciding what performance information is needed (may be assisted by program logic models and theories of change)
    • Selecting performance measures
    • Conducting activities (e.g. delivering goods and services)
    • Monitoring and assessing performance
    • Analysing (and reporting on) performance
    • Evaluating performance and performance information and modifying as necessary.
  9. However, this guide will predominantly focus on planning, developing performance measures, and collecting data.

1.1 Requirements under the Commonwealth Performance Framework

  1. The Commonwealth Performance Framework is established by the PGPA Act, and requires entities and companies to demonstrate how public resources have been applied to achieve their purposes. The framework requires entities and companies to develop performance information, and acknowledges that appropriate performance information that best measures achievement will depend on the operating context in which purposes are pursued.
  2. The key performance documents required to be produced under the Commonwealth Performance Framework and the relevant timeframes for their publication are presented in Figure 1. Reporting under the framework requires:
    • Commonwealth entities to prepare Portfolio Budget Statements for presentation in the Budget each financial year. The Portfolio Budget Statements provide additional explanation and context on the annual appropriations, and map financial information against high level performance measures. 
    • Commonwealth entities and companies to prepare corporate plans by 31 August each financial year (or by the end of February for entities and companies that operate on a calendar year basis). Corporate plans are the primary planning documents for entities and companies, and set out purposes, key activities, operating context, and the performance measures against which performance will be measured. Entities must include performance measures in their corporate plans. These form the basis for the measurement and assessment of an entity’s performance against its purpose(s).
    • Commonwealth entities to include annual performance statements in their annual reports at the end of each reporting period (usually financial year). Annual performance statements enable an assessment of the extent to which an entity has succeeded in achieving its purposes, and an analysis of the reasons for the level of achievement. Commonwealth companies report on their actual performance results in their annual report.
  3. Entities must comply with the requirements of the PGPA Act and associated subordinate legislation and instruments (e.g. PGPA Rule, Finance Secretary’s Direction) in developing and publishing their performance information.

Figure 1: Interaction of the key reporting documents within the Commonwealth Performance Framework.

  4. For further information on preparing corporate plans, annual performance statements and Portfolio Budget Statements, refer to:
    • Resource Management Guide No. 132: Corporate plans for Commonwealth entities
    • Resource Management Guide No. 134: Annual performance statements for Commonwealth entities
    • Guide to preparing the Portfolio Budget Statements

2. Developing performance measures

Introduction

  1. The following sections discuss concepts for developing performance measures, and the specific requirements for performance measures under section 16EA of the PGPA Rule.

Translating high level purposes into activities

  1. The development of performance measures should begin with consideration of the objectives (or purposes) that the entity seeks to achieve. Purposes tend to be high-level and general by nature. Effective performance measurement therefore requires these high-level purposes to be translated into more specific, operational objectives (e.g. activities under specific areas of focus).
  2. Purposes are the strategic objectives that an entity intends to pursue over a reporting period. Activities are distinct efforts undertaken to achieve a specific result. What represents an ‘activity’ is for entities to define, and will depend on the nature of the entity, including the complexity of its activities and its operating context. One activity may also contribute to multiple purposes, or a purpose could be achieved through multiple activities.
  3. Activities should be aligned with the entity’s purposes, and generally represent a natural focus for the development of performance measures. Performance reporting at the activity level should focus on what the activity is intended to achieve and the contribution it is expected to make to fulfilling the entity’s purposes.
  4. For further information on activities, refer to RMG 132: Corporate plans for Commonwealth entities.

Outcome: Strengthening the sustainability, capacity and diversity of our cities and regional economies including through facilitating local partnerships between all levels of government and local communities; through reforms that stimulate economic growth; and providing grants and financial assistance

Purpose: Supporting regional development, cities and local communities

Program: Program 3.3 – Cities

Activities:
  • Lead development on cities policy, including sustainability
  • Contribute to whole-of-government activities including the Sustainable Development Goals and the Indigenous Advancement Strategy
  • Deliver, monitor and evaluate the Smart Cities and Suburbs Program and advise on smart cities policy

Performance measures:
  • Improved liveability and increased productivity in Australia’s cities
  • Improved access to jobs and reduced congestion in Australia’s cities

Figure 2: Translating purposes into activities in the Department of Infrastructure, Transport, Regional Development and Communications 2019-20 Corporate Plan. In this example, the Department of Infrastructure, Transport, Regional Development and Communications has broken down its presentation of performance measures based on outcome and program. It has developed activities which stem from, and are aligned with, its purpose, and subsequently developed performance measures related to those activities that demonstrate how the entity contributes to achieving its purpose. It would not be expected that every activity would have a performance measure dedicated to it.

Using logic models

  1. Logic models can be a useful tool in developing performance measures. A logic model is a visual representation of a program’s “theory of change”. It describes how an intervention (that is, a set of activities designed to achieve a change) is assumed to contribute to a chain of results flowing from the inputs and activities to achieve short, medium and long-term outcomes. 
  2. Logic models can be used to: 
    • help create a theory of change that can connect efforts within an entity’s direct control (e.g. processes or outputs) to high-level outcomes of that effort (outcomes over which the entity has little influence);
    • provide a useful tool for program planning, implementation and evaluation (including testing the validity of assumptions about how change occurs);
    • identify appropriate performance measures to measure success.
  3. Logic models can assist entities to identify the most appropriate information around which to develop performance measures by assisting entities to define what constitutes “success”, or the extent of achievement. Figure 3 below provides an example of a logic model in the case of an anti-smoking campaign. In the context of Figure 3, success could be described in terms of the specific attributes of an activity (e.g. social marketing and advertising campaigns delivered in X areas), output (exposure of people to anti-smoking messages and enforcement of cigarette sales regulations), or outcome/impact (e.g. lower rates of smoking and reduced smoking-related disease in the community).  Measures of success form a natural focus for the development of performance measures to assist an entity to demonstrate achievement. However, it is important to note that logic models do not test cause-effect for an intervention.
  4. Because a logic model explains the assumptions behind the expected sequence of activities, outputs, and outcomes, it can also assist entities to determine the appropriate time to measure and assess outputs, and both shorter- and longer-term outcomes. 

Figure 3: Example of a logic model for an anti-smoking campaign. Adapted from NSW Government, “Developing and Using Program Logic: A Guide,” available at: https://www.health.nsw.gov.au/research/Pages/developing-program-logic.aspx

  5. Good logic models do not include details about everything that happens in an intervention, but summarise the aspects that are important in explaining how the intervention produces the changes that it is aiming to achieve.
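To make the idea concrete, the sketch below represents the anti-smoking logic model from Figure 3 as a simple ordered data structure in Python. It is illustrative only: the stage names and entries are assumptions based on the example described above, not a prescribed format for entities.

```python
# A minimal, illustrative representation of a logic model as an ordered
# chain of stages. The entries are assumed from the anti-smoking example
# in Figure 3; they are not a prescribed or complete set.
from collections import OrderedDict

anti_smoking_logic_model = OrderedDict([
    ("inputs", ["campaign funding", "health promotion staff"]),
    ("activities", ["social marketing and advertising campaigns",
                    "enforcement of cigarette sales regulations"]),
    ("outputs", ["people exposed to anti-smoking messages",
                 "retailer compliance checks completed"]),
    ("short_term_outcomes", ["increased awareness of smoking harms"]),
    ("medium_term_outcomes", ["lower rates of smoking"]),
    ("long_term_outcomes", ["reduced smoking-related disease in the community"]),
])

# Walking the chain makes the assumed sequence of results explicit,
# which is where candidate performance measures can be identified.
for stage, items in anti_smoking_logic_model.items():
    print(f"{stage}:")
    for item in items:
        print(f"  - {item}")
```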

3. Requirements for performance measures

Requirements under the PGPA Rule for corporate plans

  1. Section 16EA of the PGPA Rule sets out the requirements that all performance measures must meet for the purposes of section 16E of the PGPA Rule in preparing a corporate plan for a Commonwealth entity.[1]
  2. The performance measures for an entity meet the requirements of section 16EA if, in the context of the entity’s purposes or key activities, they:

 

(a) relate directly to one or more of the entity’s purposes or key activities
Description: The performance measures should be directly aligned to what the entity is trying to achieve, as outlined in an entity’s purposes or key activities.

(b) use sources of information and methodologies that are reliable and verifiable
Description: Performance measures should be supported by clearly identified data sources and methodology. Data sources should provide data that is reliable and able to be verified. The methodologies used to assess performance should be able to produce data that is reliable, be applied consistently, and be able to be substantiated.

(c) provide an unbiased basis for the measurement and assessment of the entity’s performance
Description: The performance measures should provide an objective basis for assessment. This means that performance measures, together with the details of data sources and methodologies, should provide confidence to a reader that the basis of measurement is free from bias.

(d) where reasonably practicable, comprise a mix of qualitative and quantitative performance measures
Description: Where reasonably practicable, the assessment of performance should be supported by a mix of qualitative and quantitative performance measures. This recognises that there may be circumstances where an entity will have only qualitative or only quantitative measures.

(e) include measures of the entity’s outputs, efficiency and effectiveness if those things are appropriate measures of the entity’s performance
Description: An entity’s performance measures should facilitate an assessment of how an entity’s activities support the achievement of its purposes, through the measurement of outputs, efficiency and effectiveness.

(f) provide a basis for an assessment of the entity’s performance over time
Description: Performance measures should enable an entity to demonstrate its performance in achieving its purposes or key activities over time. Generally, trends in performance measured on a consistent basis over time will be more informative than standalone or discontinuous measurement of performance.

Directly relates to purposes or key activities (16EA(a))

  1. Each performance measure must relate directly to one or more of the entity’s purposes or key activities. That is, each performance measure should be aligned to what the entity is trying to achieve. 
  2. For example, for an entity with purposes relating to policing and national security, and activities relating to investigating serious crime, performance measures such as ‘Percentage of cases before the court that result in conviction’ and ‘Positive return on investment for investigation of crime’ relate directly to the entity’s purposes or key activities.
  3. Similarly, for an entity with purposes relating to regulation, and key activities relating to monitoring compliance and taking enforcement actions, measures such as ‘Level of compliance with [specific statutory obligations] by regulated entities’ and ‘Proportion of decisions upheld upon review’ are directly aligned to its purposes or key activities.
  4. By contrast, for an entity with purposes or key activities relating to the provision of policy advice to Government on legal matters, readers may find it difficult to understand how measures such as ‘Number of visitors to the entity’ or ‘Investment in new technologies ($)’ relate to the entity’s purposes. There is no clear connection between these measures and the purposes or key activities of the entity. Further, it may be difficult for the entity to demonstrate its level of achievement against its purposes through such measures.
  5. An entity’s corporate plan should provide a clear and understandable link between purposes, the key activities that assist in achieving these purposes and the performance measures that will be used to measure and assess performance.
  6. The corporate plan should also provide a reader with information on who will benefit from the activities to be undertaken and how they will benefit.
  7. The nature of many Commonwealth activities means that other jurisdictions and/or other parties such as the not-for-profit sector often contribute to the achievement of objectives or outcomes pursued by Commonwealth entities. These objectives or outcomes can be viewed as ‘common objectives or outcomes’. The performance measures developed by an entity should be designed to measure the contribution it makes, recognising that the performance of other jurisdictions or parties will impact on the achievement of these common objectives. 
  8. In developing performance measures for activities that contribute to common objectives or outcomes, it is expected that entities will focus on those activities that make a significant contribution to these common objectives or outcomes, rather than contributions that are minor or incidental to their achievement.
  9. The following example includes information on the contribution that the Department of Infrastructure, Transport, Cities and Regional Development makes to the performance measure on the number of road fatalities.
     

Example 1 – Demonstrating how performance measures relate to purposes of the entity

The Department of Infrastructure, Transport, Cities and Regional Development 2019-20 Corporate Plan presents the department’s performance measures under the corresponding departmental programs and outcomes, identifying how these are related to the department’s purposes. The performance measures in the extract below from page 16 are relevant to “Program 2.2 – Road Safety”, which is aligned to the department’s purpose of “making travel safer.” These measures are part of a number of measures the department describes as ‘whole-of-community’ measures which focus on Australian community outcomes the department seeks to influence, noting that a number of factors are at play in the delivery of these results. The presentation of these ‘whole-of-community’ measures assists the reader to understand how the performance measures relate to the entity’s purposes and the way the department contributes to each result.

[Image: success measures example]

Reliable and verifiable (16EA(b))

  1. The performance measures must use sources of information and methodologies that are reliable and verifiable[2].
  2. Performance measures should be supported by clearly identified data sources and methodologies. Data sources should provide data that is reliable and able to be verified. Methodologies used should be able to produce accurate data, be applied consistently (both by different users at a given point-in-time and by similar users over different reporting periods), and be able to be substantiated so that the processes followed in generating data for each measure are able to be validated.
  3. Entities should maintain records that properly document the data sources and methodologies to be used to measure its performance against each of its performance measures.
  4. Data sources may include research and evaluation centres (such as the Bureau of Infrastructure, Transport and Regional Economics), internal databases or systems (such as payment or call centre systems), or external sources (such as the Australian Bureau of Statistics or OECD reports).
  5. Methodologies may include participant and stakeholder surveys, benchmarking, evaluation or data mining.
  6. Entities should avoid using vague language like ‘qualitative assessment’ but rather identify a specific methodology (such as accessing results from internal databases, participant and stakeholder surveys conducted by an independent organisation, or calculations that rely on inputs from specified internal or external sources).
  7. Entities should avoid using vague language such as ‘timely’ or ‘ready to implement’ when describing timeframes and intended results. Precise parameters should be available to a reader applying performance assessment. Without this information, there is a potential for bias in the reported result and it may be difficult for the measure to be measured consistently, or be substantiated.
  8. For example, a measure such as ‘Government measures are legislated and implemented in a timely manner’ does not make clear what the measures are, what ‘timely’ means, nor what implemented means. Does the entity have a timetable for implementation? The language used should be clear to enable consistent measurement and enable the reader to understand what is being measured and how.
  9. By contrast, the information presented in Example 2 below allows the reader to understand what is being measured and how, and provides confidence that the data can be substantiated and verified. 

Example 2 – Demonstrating that performance measures are reliable and verifiable

The Department of Infrastructure, Transport, Cities and Regional Development 2019–20 Corporate Plan (p 9) includes a range of performance measures for each program. The table below provides details of performance measures, targets over the life of the corporate plan, the sources of data, and the relevant methodology.

[Image: success measures example]

Unbiased basis for measurement & assessment (16EA(c))

  1. To avoid false conclusions, the performance measures must provide an unbiased basis for assessment of the entity’s performance. 
  2. When designing an approach to collecting and/or analysing information, entities should consider how bias might be introduced. 
  3. Common sources of bias include:

    • Exclusion bias - occurs when relevant information is not collected (for example, where information is not collected on activities that make key contributions to fulfilling a purpose).
    • Sampling bias - occurs when individuals or groups are disproportionately represented in the sample from which information is collected (for example, when key stakeholders are omitted from the sample group).
    • Interaction bias - occurs when the sample group is aware that it is being observed/tested and changes its behaviour either consciously or subconsciously (for example, when people perform differently because they know they are being observed/tested).
    • Perception bias - occurs when the people collecting information have preconceived ideas about how a system should behave or about what the results will be.
    • Operational bias - occurs when the process for collecting information is not followed or when errors are made in the recording and analysis of data.

  4. Case studies and reviews selected ‘after the fact’ (that is, after a particular activity has begun or has been completed, or after the entity’s period for performance has ended) also introduce the potential for bias. Case studies should not be relied upon as a stand-alone measurement unless the scope of the case study is predetermined, the activities clearly stated and the measurement methods determined in advance. This avoids the risk of introducing bias where only favourable case studies that tell ‘success stories’ are selected. Case studies and details of information to be collected should therefore be decided at the time the corporate plan is developed and before information collection occurs. Sufficient information should be included in the corporate plan to provide confidence to the reader that the selection of case studies and reviews is unbiased.
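As a hedged illustration of the point above, the sketch below predetermines case-study sites by drawing a random sample from the full list of delivery sites before delivery begins, so the selection cannot favour ‘success stories’ after the fact. The site names and sample size are hypothetical.

```python
# Illustrative only: selecting case-study sites at random, in advance,
# so the choice cannot be influenced by which sites later perform well.
# Site names are hypothetical.
import random

all_delivery_sites = ["Site A", "Site B", "Site C", "Site D",
                      "Site E", "Site F", "Site G", "Site H"]

# Fix the seed so the selection is reproducible and can be documented
# at the time the corporate plan is prepared.
rng = random.Random(2024)
case_study_sites = rng.sample(all_delivery_sites, k=3)

print("Case studies predetermined for this reporting period:", case_study_sites)
```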

Qualitative & quantitative performance measures (16EA(d))

  1. An entity’s performance measures must, where reasonably practicable, comprise a mix of qualitative and quantitative performance measures.
  2. In most cases, it is expected that an entity will have both quantitative and qualitative performance measures to capture the multiple dimensions of an entity’s performance. However, it may not always be ‘reasonably practicable’ for the entity to have qualitative measures in the context of the entity’s purposes or key activities. For example, in certain limited circumstances, such as in the early stages of the implementation of a program or activity, it may be appropriate for an entity to have solely quantitative performance measures. As implementation progresses, it could be expected that qualitative measures may also be developed.
  3. For a key activity focused solely on policy development, deciding on the appropriate mix of quantitative and qualitative performance measures that are ‘reasonably practicable’ should take into account factors such as the cost of data collection, the value of the data to the entity and the needs of stakeholders. 
  4. For an entity with purposes relating to policing and national security, quantitative measures that measure outputs such as ‘percentage of investigations that result in a prosecution or intelligence referral outcome’ may be appropriate. To provide a more complete picture of performance, these quantitative measures could be supplemented with qualitative measures such as ‘level of community confidence in the contribution of the entity to law enforcement and security’.
  5. In the case of service delivery functions, while the speed of processing claims may be an important quantitative measure of performance, the value of this information would be strengthened by combining it with a qualitative performance measure, such as the quality of decision-making based on the results of a survey of key stakeholders, and measures that assess the efficiency of claims processing.
  6. The following examples include a mix of qualitative and quantitative measures to assist the reader in gaining a more complete understanding of an entity’s performance.

Example 3 – Mix of quantitative and qualitative performance measures

The Old Parliament House 2019-20 Corporate Plan (p 11) includes a mix of quantitative and qualitative performance measures to measure performance. For Strategic Priority 1, quantitative measures are used to report on the number of visitors (both onsite and to the entity’s website) and the number of people participating in programs, and a qualitative measure is used to measure the quality of service (visitor satisfaction). The corporate plan also notes why static targets are used. This assists the reader in gaining a more complete understanding of the entity’s performance in delivering exhibitions, events, digital experiences and programs.

Example 4 – Mix of quantitative and qualitative performance measures

The Department of Veterans’ Affairs 2019-20 Corporate Plan (p 17) includes a mix of quantitative and qualitative performance measures to measure performance. Quantitative measures are used to report on service delivery (for example, the timeliness and quality of service delivery), and a qualitative measure is used to measure the quality of service (customer satisfaction with their service delivery experience).

[Image: success measures example]

Measures of outputs, efficiency and effectiveness (16EA(e))

  1. An entity’s performance measures must include measures of the entity’s outputs, efficiency and effectiveness, if these are appropriate measures of the entity’s performance in the context of its purposes or key activities.
  2. Historically, output measures have been the predominant means by which many entities have measured their performance.  However, an entity’s proper use of public resources can reasonably be expected to include the capacity to measure and assess the effectiveness of an entity’s key activities, and to assess the efficiency of these activities, where this is appropriate in the context of the purposes or key activities of the entity. While output measures should continue to be used, particularly where they are a proxy for the measurement of effectiveness, it would generally be appropriate for entities to have a suite of performance measures that consist of a mix of output, effectiveness and efficiency measures.   
  3. For an entity with key activities involving the development and provision of policy advice to government, the development of output and effectiveness measures may be sufficient as efficiency measures may not be appropriate when the cost and reliability of data collection and the needs of stakeholders are taken into account.
  4. In some cases it may be appropriate to use a proxy measure (that is, an indirect measure of the activity which is strongly correlated with the activity), for example, when data is not available or cannot be collected at regular intervals, or when the cost of gathering and analysing information is prohibitive or outweighs the potential benefits of collecting and reporting on it.
  5. Where the impact of activities is otherwise difficult to assess, output measures may be an appropriate proxy for assessing whether services have reached their intended recipients, at the right time and cost.
  6. If proxy measures are used, entities should include in their corporate plan an explanation of why they are being used, demonstrate why the proxy measure is suitable and outline their approach to the development of effectiveness and/or efficiency measures in the longer term. 
  7. A discussion of output, effectiveness and efficiency measures follows.

Output measures

  1. Output measures assess the goods and services produced by an activity, including their volume or quantity and their quality.
  2. Some common types of output measures are:
    • Activity - measures the volume of work undertaken. For example, this may be the number of services provided, the number of service recipients, or the number of goods produced.
    • Access - measures how easily the intended recipients can obtain a good or service. There are several dimensions to access, such as timeliness of access (for example, measured in waiting times, processing times, time taken to produce an output), affordability of access (for example, the out-of-pocket cost of medical services), and service availability (for example, 24/7 availability of online services).
    • Quality - measures how fit for purpose a good or service is. For example, this may be the extent to which outputs conform to certain standards (such as legislative or service standards). There are several dimensions to quality, such as accuracy (for example, accuracy of payments, decisions, etc.), safety of the good or service, and responsiveness to customer needs (for example, levels of customer satisfaction).

Efficiency

  1. Efficiency is generally measured as the price of producing a unit of output, and is generally expressed as a ratio of inputs to outputs. A process is efficient where the production cost is minimised for a certain quality of output, or outputs are maximised for a given volume of input. In a public sector context, efficiency is generally about obtaining the most benefit from available resources; that is, minimising inputs used to deliver the policy or other outputs in terms of quality, quantity and timing. 
  2. Examples of efficiency measures may include:
    • Cost per benefit payment;
    • Cost per inspection; or
    • Processing cost per grant.
  3. Key activities that are transactional in nature (such as the processing of welfare or grant payments, the operation of call centres or revenue collection functions) lend themselves to efficiency measurement, in addition to the measurement of outputs and effectiveness to provide a complete picture of the performance of the entity. 
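Because efficiency measures reduce to a ratio of inputs to outputs, they are straightforward to compute once cost and output data are available. The following minimal sketch calculates a cost-per-claim figure for a hypothetical claims-processing activity; all figures are invented for illustration.

```python
# Illustrative efficiency calculation: cost per unit of output.
# All figures are hypothetical.
total_input_cost = 1_250_000.00   # total cost of the claims-processing activity ($)
outputs_produced = 48_500         # number of claims processed in the period

cost_per_claim = total_input_cost / outputs_produced
print(f"Processing cost per claim: ${cost_per_claim:.2f}")

# Expressed the other way, outputs per dollar of input:
claims_per_dollar = outputs_produced / total_input_cost
print(f"Claims processed per dollar: {claims_per_dollar:.4f}")
```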

Effectiveness

  1. Measures of effectiveness assess how well an entity has delivered on its purposes; that is, whether the activities of the entity have had the intended impact.
  2. Measures of effectiveness might include:
    • the change in literacy rates as a result of activities focused on raising the national reading standards in primary school children; or
    • the change in the number of workplace injuries as a result of work health and safety regulation aimed at reducing workplace injuries.
  3. Although the examples of effectiveness measures offered above are quantitative in nature, effectiveness will not always be able to be measured in quantitative terms. For many activities undertaken by the public sector, qualitative measures will complement quantitative measures. For example, people receiving education to improve literacy skills might be surveyed to understand the impact that an increase in literacy had on their quality of life.
  4. The following are a number of examples of output, efficiency and effectiveness measures reported on by entities in their corporate plans. These examples are provided to inform entity thinking in the context of their particular circumstances and are not intended to be prescriptive.

Output measures

  •  Services Australia, 2019-20 Corporate Plan (pp 14-15):
    • Customer satisfaction with their service delivery experience (quality)
    • Customers receive payments free of administrative and/or processing errors (social security and welfare) (accuracy/quality)
    • Face-to-face service level standards - average wait time (timeliness)
  • Strategic objective: advise and assist everyone to understand their rights and obligations (Australian Building and Construction Commission, 2019-20 Corporate Plan, p 9)
    • Number of site visits undertaken nationally (activity)
    • Surveyed stakeholders who are satisfied or highly satisfied with the quality and timeliness of advice and assistance provided (quality of service)
  • Strategic priority: Assist the Minister for Industrial Relations to foster safe, fair and productive workplaces (Attorney-General’s Department, 2019-20 Corporate Plan, p 9)
    • Average processing time of Fair Entitlements Guarantee claims is 14 weeks (timeliness)
    • 95% of Fair Entitlements Guarantee claim payments are correct (accuracy/quality)
    • 80% of claimants are satisfied with the department’s administration of the Fair Entitlements Guarantee (quality)
    • 80% of insolvency practitioners are satisfied with the department’s administration of the Fair Entitlements Guarantee (quality)

Efficiency measures

  • Levy collection processes cost no more than 1.2% of levies disbursed (Department of Agriculture, 2019-20 Corporate Plan, p 18)
  • Claims administration cost as a ratio of all claims expenses is 17 per cent or lower for each injury year (Comcare, 2019-20 Corporate Plan, p 11)
  • Costs per $1000 of assets supervised by APRA (Australian Prudential Regulation Authority, 2019-20 Corporate Plan, p 26)
  • Cost per employee outcome (measured as the number of jobseekers employed three months after participation in jobactive) (Department of Employment, Skills, Small and Family Business, 2019-20 Corporate Plan, p 30)

Effectiveness measures

  • Proportion of job seekers employed three months following participation in employment services (Department of Jobs and Small Business, 2019-20 Corporate Plan, p 12)
  • Number of road fatalities; serious injuries due to road crashes (Department of Infrastructure, Transport, Cities and Regional Development, 2019-20 Corporate Plan, p 9)
  • Increased proportion of employees who have returned to work, measured by duration on incapacity benefits (this complements the survey-based measure of return to work for the Comcare scheme) (Comcare, 2019-20 Corporate Plan, p 11)
  • Regulated entities report that our regulatory approach improves WHS outcomes (Comcare, 2019-20 Corporate Plan, p 13)

Basis for assessment over time (16EA(f))

  1. The performance measures of an entity must also provide a basis for an assessment of the entity’s performance over time. 
  2. Performance measures should be developed and reported in a manner that assists a reader to understand an entity’s performance over time. Many government objectives will only be achieved over the medium to long term. The ability to measure the efficiency and effectiveness of an entity’s activities over time therefore gives the entity and relevant stakeholders a more informed view of progress in achieving its purposes, and supports more informed decisions about the success or otherwise of existing activities. It also enables an entity to demonstrate that it is delivering the short, medium and/or long term impacts that are essential milestones in achieving its purposes and delivering on government priorities over time.
  3. To be most meaningful, entity performance should be measured in a consistent manner over a period of time, particularly as many activities undertaken by entities have objectives or outcomes that have longer term horizons. Generally, trends in performance measured on a consistent basis over time will be more informative than standalone or discontinuous measurement of performance. 
  4. However, it is recognised that an entity’s performance measures can be expected to change from time to time as entities progressively enhance their approach to the measurement of their performance. Similarly, an entity’s performance measures can be expected to change as programs move from implementation to ongoing activities. As mentioned earlier, at the initial stages of implementation, quantitative measures that measure outputs may be the most appropriate. Once a program or activity is ongoing, it will generally be appropriate for performance measures to be developed that measure effectiveness or impact and, for activities that are transactional in nature, measures that assess efficiency. Entities should therefore attempt to achieve a balance between maintaining consistency in how performance is measured and periodically improving their approach to performance measurement.
  5. Where changes are made to performance measures, an entity’s corporate plan should contain sufficient detail about the changes made to enable a reader to understand the rationale for the changes and why the changes improve the way performance is measured for the activities concerned. 
  6. The following example from the Department of Education includes performance measures that assess performance over a range of timeframes. The example from the Department of Infrastructure, Transport, Cities and Regional Development also demonstrates how an entity can develop trend data over time.

Example 5 – Assessment of performance over time

The Department of Education 2019-20 Corporate Plan (pp 19-22) presents performance measures that cover a mix of timeframes. For example, the targets ‘At least 18% of domestic undergraduates are from a low socioeconomic background’ and ‘At least 90% of child care payments to all services are accurate’ address the shorter term, while the targets ‘Halve the gap for Indigenous students in year 12 (or equivalent) attainment rates by 2020’ and ‘Lift the Year 12 (or equivalent) or Certificate III attainment rate to 90% by 2020’ address the medium term. Finally, ‘Australia’s share of the world’s top 10% most highly-cited research publications remains above the OECD average’ addresses the longer term.

Example 6 – Assessment of performance over time

The Department of Infrastructure, Transport, Cities and Regional Development 2019-20 Corporate Plan presents performance measures that are used consistently across several reporting periods. One example is the performance measure relating to the number of road fatalities (measure number 3 in Figure 6A), which is adopted and identified in the department’s corporate plan over several reporting periods. This consistency in measurement enables the development of trend data, as seen in the department’s 2018-19 annual performance statement (Figure 6B), where the trend line shows an overall reduction in the number of fatalities over time.

Figure 6A: 2019-20 Corporate Plan (p 9)

 

Figure 6B: 2018-19 Annual Performance Statement (p 65)

 

Example 7 – Assessment of performance over time

The Department of Social Services 2019-20 Corporate Plan presents performance measures that are used consistently across several reporting periods. One example of this is the performance measure relating to the percentage of recipients who are not receiving income support within 3/6/12 months after exiting student payments. This consistency enables the development of trend data over time, as seen in the department’s 2018-19 annual performance statement (p 26; below). The annual performance statement shows a comparison table of the results for the current and previous two reporting periods, allowing an assessment of overall performance over time.
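Trend data of the kind shown in Examples 6 and 7 can be derived by fitting a simple trend line to results that are measured on a consistent basis across reporting periods. The sketch below applies an ordinary least-squares fit to invented yearly figures; it is illustrative only and does not reproduce any department’s actual data.

```python
# Fitting a simple trend line to hypothetical yearly results.
# Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

years = [2014, 2015, 2016, 2017, 2018]
results = [1156, 1209, 1295, 1225, 1135]  # invented yearly counts

slope, intercept = linear_regression(years, results)
direction = "downward" if slope < 0 else "upward"
print(f"Trend: {direction}, changing by about {slope:.1f} per year")
```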

Common issues

  1. Some common issues when developing performance measures are:
    • performance information is too focused on outputs and does not address the impact made, value created and outcomes achieved as a result of the entity’s activities;
    • performance measures are selected on the basis that they are the easiest to measure (even if they don’t offer meaningful insight);
    • too many performance measures are developed, which can obscure relevant information for decision-makers who often have restrictions on their available time.
  2. Another common issue when developing performance information is that performance measures may be too heavily weighted towards management-level information, or information on enabling activities (activities which do not, in themselves, contribute towards the achievement of the entity’s purpose(s), but must be performed in order to make it possible to pursue the entity’s purpose(s)). Performance measures relating to such activities can be useful for internal management purposes but should not be used as a substitute for measures of effectiveness and efficiency. For example:
    • Level of engagement of internal employees (as measured by annual employee surveys);
    • Level of diversity among internal employees across different employment classifications;
    • Level of satisfaction with the delivery of internal ICT Services;
    • Whether or not the entity has appropriate insurance arrangements in place for the financial year.

[1] Section 16EA was inserted into the PGPA Rule by the Public Governance, Performance and Accountability Amendment (2020 Measures No. 1) Rules 2020.

[2] Reliable and verifiable are intended to have their ordinary meanings. Reliable may be defined as ‘able to be relied on; trustworthy’, and verifiable may be defined as ‘able to be proven as true, as by evidence or testimony; able to be confirmed or substantiated’ (Macquarie Concise Dictionary, 7th ed, 2017).

4. Collecting performance information

  1. Quality performance information is underpinned by quality data.
  2. Entities should identify data sources and assess collection methods as part of the development of performance measures to determine whether suitable information will be available at the end of the reporting period to enable the meaningful and accurate measurement of performance.
  3. The following considerations are important when deciding how to collect performance information (and what performance information to collect):
    • Availability – is the information available already (e.g. in existing administrative systems)? If not, is it feasible and cost-effective to collect it?
    • Suitability – can accurate and complete information be collected? Can it be verified? For example, data may not be reportable without making omissions to protect sensitivity or commercial confidentiality. In such cases, the remaining data may not be sufficient to provide a reliable view of performance.
    • Timeliness – can information be collected on a timescale that suits the purpose to which it will be put? Information that takes several months to collect and analyse may be relevant to an evaluation towards the end of the life of an activity, but it may not be useful for the purposes of measuring performance in a particular reporting period.
    • Cost – what is the cost of collecting information? The cost of producing and analysing performance information should be balanced against the benefit the information provides, and should not detract from an entity’s ability to deliver activities and achieve its purposes.
  4. The frequency at which information needs to be collected (and the volume of information required) will also influence how it is collected. Extensive information that needs to be collected frequently may justify investment in systems that allow the automation of data collection.
  5. Collecting information using electronic (e.g. web-based) forms that feed responses directly into a spreadsheet or database is likely to be a better solution than paper-based forms (where individual responses can be easily separated from a physical file, and errors can occur when information is entered manually into a spreadsheet).  
  6. When analysing data to get a meaningful picture of performance, comparisons should be considered, e.g. year by year, using benchmarking, or comparing groups that did and did not receive the intervention. However, not all activities are suitable candidates for benchmarking or comparing with similar activities, and it can be difficult and costly to ensure like-with-like activities or processes are being measured. 
  7. When designing an approach to collecting and/or analysing information, entities should consider what bias might be introduced by the collection method. For example, while electronic online surveys may be cheaper to administer than paper ones, care is needed to ensure that the collection approach does not produce a biased sample that leads to inferences that do not hold for segments of the population these methods do not reach.
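As a minimal sketch of the ‘suitability’ consideration above, the following Python snippet screens responses exported from an electronic form for completeness before they are analysed. The file name and column names are assumptions for illustration.

```python
# Illustrative completeness check on form responses exported to CSV.
# The file name ('form_responses.csv') and column names are hypothetical.
import csv

REQUIRED_FIELDS = ["respondent_id", "service_date", "satisfaction_rating"]

def screen_responses(path):
    """Count responses with all required fields populated."""
    complete = incomplete = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if all((row.get(field) or "").strip() for field in REQUIRED_FIELDS):
                complete += 1
            else:
                incomplete += 1
    return complete, incomplete

complete, incomplete = screen_responses("form_responses.csv")
total = complete + incomplete
if total:
    print(f"{complete} of {total} responses are complete ({complete / total:.0%})")
else:
    print("No responses collected")
```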

Methods for measuring performance

  1. Some common performance measurement methods include:
    • Data mining - where performance information is collected (or extracted from existing data collections) and presented in numerical form;
    • Surveys – where information is sought from delivery partners and/or external stakeholders to understand the results of an activity;
    • Peer reviews – where assessments of performance are provided by either experts or stakeholders;
    • Evaluations – where information from diverse sources is combined to understand the factors that influence the effectiveness of an activity.  
  2. With the exception of data mining, each of these methods can be used to collect both quantitative and qualitative information that provides insights into performance. 
  3. The following tables describe the typical approaches, relevant tools, strengths, and limitations associated with these methods.

Data mining

Description

Data mining is the process of identifying trends or patterns in large data sets, and can be used to turn raw data into useful performance information.

Such performance information is typically represented as whole numbers, percentages, proportions, trends (e.g. increase or decrease from an established baseline figure), statistics (e.g. average, median or standard deviation) and ranks.

Common examples include the dollar cost of delivering an activity, the number of full-time equivalent staff required to deliver an activity, the unit cost of providing a service (e.g. cost per transaction), and turnaround times for responding to inquiries.

Typical approaches

Numerical data will often be extracted from existing databases or systems, such as financial management systems, human resource information management systems, project management systems and customer relationship management systems.

If the information cannot be extracted from internal systems, it may need to be collected from delivery partners, Commonwealth or inter-governmental partners (e.g. state and local governments) and external stakeholders. Ideally, this information will be collected through a well-developed and consistent process.

Relevant tools

Data mining typically requires:

  • expertise in database and management systems to design automated reports that extract performance information that can be easily manipulated and presented;
  • data analytics, data visualisation and statistical analysis tools that transform raw numerical data into useful performance information (such as trends and averages). Such tools are also useful in testing the quality of data (e.g. through analysis of statistical variation);
  • well-designed forms (e.g. web- or PDF-based) that allow accurate information to be collected cost-effectively.

Strengths

Data mining is typically most suitable for quantifying outputs (e.g. number of payments/transactions made) and efficiency of activities.

Well-maintained and stable longitudinal information sets can provide insights into the impact of an activity over time (e.g. a change in trends can indicate changes in environmental factors affecting performance).

This method can be relatively inexpensive if information already exists (e.g. in a management accounting database) and reporting can be automated.

Limitations

The information collected through data mining is not well suited to measuring the effectiveness of activities (especially when results are based on interdependent activities that deliver results on different timescales).

The data can be misleading when performance measures are poorly designed, ambiguous or not compared against a clear target.

The accuracy of data can also be compromised unless there are effective controls over the input of data and access to systems and databases.
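The summary statistics described above can be produced with standard tools. The following minimal sketch computes the average, median, standard deviation and change from baseline for an invented series of monthly turnaround times; in practice the data would be extracted from an entity’s own systems.

```python
# Illustrative summary statistics over extracted numerical data.
# The monthly turnaround times (days) are invented for illustration.
import statistics

turnaround_days = [12.1, 11.8, 13.4, 12.9, 11.2, 10.8,
                   11.5, 12.0, 10.9, 10.4, 10.1, 9.8]

print(f"Average: {statistics.mean(turnaround_days):.1f} days")
print(f"Median:  {statistics.median(turnaround_days):.1f} days")
print(f"Std dev: {statistics.stdev(turnaround_days):.1f} days")

# A simple change-from-baseline figure of the kind often reported:
baseline = turnaround_days[0]
latest = turnaround_days[-1]
print(f"Change from baseline: {(latest - baseline) / baseline:+.0%}")
```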

Benchmarking

Description

Benchmarking is a way of comparing the performance of activities against similar activities. It is often based on comparisons against a range of quantitative and qualitative information (e.g. cost and customer satisfaction).

Benchmarks for comparison are often constructed from extensive datasets for classes of activities (e.g. procurement, corporate functions, project management or service delivery).

Benchmarks can be expressed as averages (for numerical measures), percentiles (e.g. the top 10 percentile representing best practice) or common descriptions of well-performing activities (for qualitative measures).

Benchmarking of an activity over time can track progress towards achieving a predetermined quality of delivery.

The appropriateness of benchmarking a particular activity (or activities) will depend on the existence of (or ability to construct) a benchmarking dataset. Information that allows meaningful comparison of activities (i.e. ‘apples for apples’) may not be available. For example, activities may be sensitive and not reported publicly (e.g. national security activities) or the benchmarking data that is available may not fit the circumstances of a Commonwealth entity (e.g. procurement in the private sector may not be subject to the same requirements as apply in the public sector).

Typical approaches

Benchmarking will typically consist of collecting information using standard techniques such as extracting quantitative data from an entity’s management databases, collecting specific information internally, and seeking information from external stakeholders (or clients).

Once collected, information will be subject to some quality assurance and filtering to ensure it can be reliably compared against the benchmark dataset. Results will be presented in terms of variations (e.g. from the dataset average or top 10 percentile) for the different aspects of the activity being assessed (e.g. cost of IT).

Relevant tools

Benchmarking typically requires:

  • information collection tools, such as data extraction from existing databases and information collection forms;
  • expertise and software systems for analysing the data and benchmarking sets;
  • accessible datasets that can be used to construct benchmarks. Such datasets may be available commercially (e.g. on a subscription basis) or publicly (e.g. data published by organisations such as the Organisation for Economic Co-operation and Development).

Strengths

Information from benchmarking can be used to identify better practice, where comparisons are made against activities that are similar in nature.

This method can provide an indication of process improvement achieved if benchmarking is repeated periodically over time.

Limitations

The validity of the results will be reduced if there are significant differences between activities or processes being benchmarked or if meaningful adjustments cannot be made to reflect such differences.

Benchmarking can be costly and time-consuming (especially if access to benchmarking datasets is provided on a fee-for-service basis).
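The comparison step in benchmarking can be as simple as expressing an entity’s result as a variation from the benchmark dataset’s average, or as a position relative to a best-practice percentile. The sketch below does this for an invented cost measure against a hypothetical peer dataset.

```python
# Illustrative benchmark comparison: variation from the dataset average
# and position relative to the best-practice decile. All figures are invented.
import statistics

benchmark_costs = [410, 385, 450, 372, 398, 430, 365, 405, 420, 390]  # peer costs per unit ($)
entity_cost = 402.0

average = statistics.mean(benchmark_costs)
variation = (entity_cost - average) / average
print(f"Variation from benchmark average: {variation:+.1%}")

# 10th percentile (lowest cost = best practice for a cost measure)
best_practice = statistics.quantiles(benchmark_costs, n=10)[0]
print(f"Best-practice (10th percentile) cost: ${best_practice:.0f}")
print("Within best-practice decile" if entity_cost <= best_practice
      else "Outside best-practice decile")
```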

Surveys

Description

Surveys can be used to collect performance information from activity partners (e.g. other entities or state and local governments) and external stakeholders (e.g. clients and beneficiaries). This method is best used when the information sought has a subjective element (e.g. satisfaction with services or impact made) that depends on the opinions and perceptions of individuals (or specific interest groups).

Good quality surveys depend on choosing a sample group that is large enough to be representative of the group whose views are being assessed (e.g. customers) and cover the range of circumstances in which an activity is delivered.

Surveys can provide both quantitative and qualitative information.

Examples of quantitative data include customer satisfaction ratings or the percentage of stakeholders that consider an activity (or activities) is making a positive impact.

Qualitative data might include stakeholders’ written explanations of why they have a positive (or negative) opinion of an activity or views on how a service might be improved.

Typical approaches

Surveys can collect information through paper-based forms (e.g. provided to participants by mail or email), face-to-face interviews, online forms and telephone interviews.

Preliminary surveys may be followed up with further interviews with respondents (either all or a subset) to gain a deeper insight into initial responses.

Quantitative data will be analysed using the techniques applied in data mining (see above), including trend and statistical analysis. Qualitative data (e.g. written responses) are typically assessed to identify emerging themes, issues or controversies that influence perceptions of performance and effectiveness.

Relevant tools

Surveys typically require:

  • electronic forms (e.g. web- and PDF-based) to collect identical data from all survey participants (either completed by respondents and/or an interviewer);
  • custom spreadsheets and databases for collating and analysing responses;
  • qualitative analysis software that helps identify and map common themes in responses to survey questions. Such software is particularly useful when a large number of responses and/or detailed (lengthy) responses are expected.

Strengths

Surveys can provide insights beyond those available from purely quantitative methods (e.g. data mining), especially when the delivery of an activity needs to be closely aligned with the opinions and specific circumstances of target groups.

This method can provide longitudinal datasets if surveys are repeated over a statistically significant period and questions remain stable and relevant.

Limitations

Poorly designed surveys may be open to misinterpretation, particularly if questions are not specific enough, or if the context in which responses are provided (e.g. the interests and circumstances of respondents) is not carefully controlled for.

Analysis of qualitative information relies on the expertise and knowledge of assessors (particularly their understanding of the circumstances and possible bias of respondents).

Voluntary surveys often have low response rates, which can adversely affect the quality of the performance information produced and limit the ability to extrapolate the results to the entire population.
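
For illustration only, the sketch below computes a survey response rate and the approximate 95 per cent margin of error for a reported proportion, showing how a small respondent pool widens the uncertainty around any extrapolation. It assumes simple random sampling, and the figures are hypothetical.

```python
# Minimal sketch: approximate 95% margin of error for a survey proportion,
# showing how a small respondent pool limits extrapolation.
# Assumes simple random sampling; figures are illustrative.
import math

respondents = 120   # completed surveys
invited = 1_000     # surveys sent out
p = 0.64            # proportion reporting a positive impact

response_rate = respondents / invited
margin = 1.96 * math.sqrt(p * (1 - p) / respondents)
print(f"Response rate: {response_rate:.0%}")
print(f"Estimate: {p:.0%} ± {margin:.1%} (95% confidence)")
```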

Peer reviews

Description

A peer review is a method for collecting performance information based on assessments by a group of experts or key stakeholders.

Peer reviews, conducted by suitably qualified persons, provide information on performance based on their knowledge and expertise concerning a specific activity (or activities). For example, IT experts may be asked to assess the delivery of a new software system in terms of cost efficiency, the quality of project management or the quality of the system delivered.

Reviews by key stakeholders would rate performance against expectations. Typically, the stakeholders invited to participate would be those consulted during the development of an activity, who provided advice on what needs were to be met, how and by whom. Key stakeholders may also include delivery partners (e.g. non-government organisations participating in the delivery of international aid programs).

Like surveys, peer reviews can provide both quantitative and qualitative information. For example, reviewers may be asked to provide a numerical score for performance. Qualitative information may include subjective views on the effectiveness of the activity.
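
For illustration only, the sketch below summarises numerical peer-review scores by criterion; a wide spread between reviewers can flag the need to explore divergent views. The criteria and scores are hypothetical.

```python
# Minimal sketch: summarising numerical peer-review scores by criterion.
# A wide spread flags divergent reviewer views. Scores are illustrative.
from statistics import mean, stdev

scores = {"cost efficiency": [7, 8, 6, 9], "project management": [5, 8, 4, 9]}

for criterion, s in scores.items():
    print(f"{criterion}: mean {mean(s):.1f}, spread (std dev) {stdev(s):.1f}")
```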

Typical approaches

Peer reviews might assess performance against specific criteria or terms of reference (e.g. cost, project management or stakeholder engagement). The chosen review group would provide an opinion based on their knowledge of the activity (or activities) being assessed. They may be provided with documentation (e.g. project plans, quantitative data or survey results) to support their assessment. Reviews may be conducted by interviewing the selected group (individually or as a group) or by asking for written responses to questions.

Reviews by key stakeholders would typically assess performance against specific criteria (e.g. impact or the quality of consultation). Information can be collected through interviews or by written response (depending on the questions being posed and the effort stakeholders are willing to contribute). Information can also be gathered through focus group–style sessions facilitated by a skilled moderator. Group sessions can help generate a degree of consensus on how key stakeholders view the performance of an activity.

Relevant tools

The main resources are the skills, experience and knowledge of the selected group from whom assessments are sought. Tools for identifying reviewers include professional networks, established stakeholder forums and senior public service networks.

Strengths

Well-selected ‘experts’ can provide an informed assessment of performance that takes into account their knowledge of similar or comparable activities.

Expert reviews will carry more weight if the persons consulted are well respected and widely recognised as experts or knowledgeable in their fields.

Stakeholder input can provide useful information about performance against expectations when information is not readily available through other means.

Limitations

Suitably qualified persons who are willing to commit time to reviewing an activity may be difficult to identify. The quality of performance information can be affected by a relatively small sample of opinions.

Experts may be biased towards particular ways of delivering activities that may, or may not, be relevant to the entity seeking their views. Such biases are likely to affect the quality of the performance assessment and reduce objectivity.

Key stakeholders providing views on performance may have a limited understanding of the context in which a Commonwealth entity is required to deliver an activity, or may have developed unreasonable expectations of results to be delivered.

Evaluations[3]

Description

Evaluations are systematic assessments of the design, implementation and outcomes of an activity. Evaluations typically examine the significant elements that affect performance, and can generate both quantitative and qualitative information about the performance of an activity.

Evaluations can be conducted at different stages in a program or activity cycle, and can be aimed at assessing various aspects of a program or activity (e.g. how a program is implemented, the effectiveness of the program, whether a program provides value for money).

Decisions about when to evaluate performance will generally take into account the stage in the program or activity life cycle, timing of key decisions, cost and the evaluation requirements.

Evaluations aimed at assessing the long-term outcomes of activities are typically conducted once an activity has reached an appropriate level of maturity. Evaluations of completed activities are often used to demonstrate what was achieved with public resources, or to help inform decisions about the value of establishing similar activities.

Evaluations will often be conducted or supported by evaluation experts. Such experts have the skills to identify the sources of information required to address the performance criteria of interest. They will also bring an array of quantitative and qualitative analysis tools that can be applied to assess performance data and identify links (for example, with external factors) that provide a more holistic view of performance.

Typical approaches

An evaluation is often a significant undertaking and is usually managed as a discrete project with governance oversight and a dedicated project management team.

An evaluation will often be preceded by substantial planning, including discussions with senior managers and external stakeholders to establish the evaluation criteria and terms of reference. Other tasks include identifying the expertise required to conduct the evaluation and assembling an appropriate review team.

During an evaluation, discussions with senior managers and stakeholders are likely to continue to ensure that the criteria and terms of reference are being met and to address any issues that arise as information is gathered and analysed. Depending on the extent and complexity of an evaluation, these discussions may occur through a reference group constituted from key managers and/or external stakeholders.

Results of an evaluation are often reported through a formal evaluation report. The report may be made public in full or in part (e.g. by reference to key findings) to satisfy accountability obligations.

Relevant tools

The tools relevant to a particular evaluation exercise will depend on the type of evaluation being conducted, the activities being reviewed and the performance criteria assessed. The evaluation practitioners selected to conduct an evaluation should be best placed to identify the tools required. They should also have the skills to use the tools and integrate results into a comprehensive view of performance.

Strengths

Evaluations are often the best (and sometimes only) way to assess the performance of complex activities – especially those that have a large number of interdependent elements delivered by multiple entities.

Data that has been collected through other means (e.g. data mining, surveys, peer reviews or benchmarking) can be used to inform a comprehensive evaluation.

Limitations

Evaluations that focus on the effectiveness and/or efficiency of an activity can be costly and time-consuming as they require the collection, analysis and synthesis of large amounts of information from diverse sources. They also generally require suitable performance information to be available.

Data sources, quality and management

  1. Stakeholders, including the Parliament and the public, should have confidence in the reliability of the reported data underpinning performance information. Particularly where the quality of data is subject to some limitations, users should be alerted to the level of confidence they can have in the accuracy of information and the implications for the level of performance reported.
  2. Data may be drawn from a variety of sources (e.g. administrative data, providers of government services, customer surveys, etc.) and entities should, as a matter of good practice, identify their sources of performance information.
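
For illustration only, the sketch below shows one way an entity might keep a structured register linking each performance measure to its data source, methodology and owner. The field names and values are hypothetical.

```python
# Minimal sketch: a structured register recording each performance measure's
# data source and methodology, supporting verifiable reporting.
# Field names and values are hypothetical.
measure_register = [
    {
        "measure": "Percentage of claims processed within 30 days",
        "data_source": "Claims administration system (monthly extract)",
        "methodology": "Claims finalised within 30 days / total claims finalised",
        "owner": "Service Delivery Branch",
    },
]

for m in measure_register:
    print(f"{m['measure']} — source: {m['data_source']} (owner: {m['owner']})")
```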

[3] Some useful resources include:

  • New Zealand Department of the Prime Minister and Cabinet, “Making Sense of Evaluation”, available at: https://dpmc.govt.nz/publications/making-sense-evaluation-handbook-everyone; and
  • HM Treasury (UK), “The Magenta Book”, available at: https://www.gov.uk/government/publications/the-magenta-book

5. Performance reporting as a continuous process

  1. Corporate planning and performance reporting should be viewed by entities as a cycle of continuous review and improvement, rather than as an annual compliance process, with the review stage informing the planning of the next cycle.
  2. Performance information can be expected to evolve with experience, changing needs, and the availability of more relevant or more reliable information.
  3. The process of developing performance measures is usually an iterative one, involving a balance between what might be ‘ideal’ information and information that is available, affordable and most appropriate to the particular circumstances.
  4. Entities should regularly review their performance information and systems for appropriateness, for example through internal and external audits to assess the effectiveness of their performance information system and to monitor the quality of their performance data.

Amending performance information

  1. Accountable authorities are encouraged to review their entity’s performance information periodically. Entities are likely to need to amend their performance information as they work to improve what they publish and to reflect any material changes that affect the entity (e.g. machinery of government changes).
  2. The reporting vehicle used to make the change and the timing of the change depends on the circumstances. It is generally preferable that the corporate plan is used to report changes as it is the primary planning document for the entity.[4]
  3. If performance information has changed during or between annual reporting cycles, it is better practice to note that a change has occurred and why it has occurred. In the absence of an explanation of changes made, it is difficult if not impossible for a reader to understand how and why an entity’s performance information has evolved over time.

Audit committees and reporting

  1. The functions of an audit committee under the PGPA Act include reviewing the appropriateness of the accountable authority’s performance reporting for the entity.[5] This would include information provided in the Corporate Plan, the Portfolio Budget Statements and the Annual Performance Statements.
  2. For further information, refer to RMG 202: Audit committees.

[4] Section 16E of the PGPA Rule allows an accountable authority to vary a corporate plan.

[5] Section 17 of the PGPA Rule.

6. Record keeping

  1. In line with an accountable authority’s responsibility to properly record and explain the entity’s performance in achieving its purposes,[6] it is good practice for entities to maintain records that support and explain the approach or rationale for how an entity intends to measure and assess its purposes. This could be expected to include:
    • the rationale for the entity’s approach to developing its suite of performance measures, demonstrating how that approach takes into account the context of the entity’s purposes or key activities (for example, why certain types of performance measures are included in the entity’s corporate plan, or why, at a particular point in time or because of the nature of an entity’s key activities, it may not be appropriate to emphasise certain types of measures; see the discussion of sections 16EA(d) to (e) above);
    • details of the data sources and methodologies to be used to measure the entity’s performance against each of its performance measures; and
    • where relevant, the entity’s approach to reviewing and updating its suite of performance information over time.
  2. A discussion of an entity’s record keeping responsibilities relating to performance is also included in RMG 132: Corporate plans for Commonwealth entities.

[6] Section 37 of the PGPA Act. 

7. Appendix

Checklist for meeting requirements under the PGPA Rule

The checklist sets out each requirement under the PGPA Rule, with related considerations and the relevant rule reference. For each requirement, record whether it is met (Yes/No) and any action proposed.

Do all performance measures relate to the entity’s purposes or key activities?

Consider:

  • Is the relationship clear in each case?
  • Do any relationships need to be explained in the corporate plan?

PGPA Rule reference: Subsection 16EA(a)
Met? Yes/No:
Action proposed:

Do all performance measures use sources of information and methodologies that are reliable and verifiable?

Consider:

  • Have supporting data sources or methodologies been identified, documented and agreed by management?
  • Has an assessment been made to confirm that data sources and methodologies are both reliable and able to be verified?
  • Has responsibility been assigned for the maintenance of data sources or the development of methodologies?

PGPA Rule reference: Subsection 16EA(b)
Met? Yes/No:
Action proposed:

Do all performance measures provide an unbiased basis for the measurement and assessment of performance?

Consider:

  • The basis on which targets have been set;
  • Whether methodologies may produce results that are biased, or perceived to be biased;
  • The adequacy of controls to prevent and detect the manipulation of data sources.

PGPA Rule reference: Subsection 16EA(c)
Met? Yes/No:
Action proposed:

Does the entity’s suite of performance measures consist of an appropriate mix of qualitative and quantitative measures?

Consider:

  • Whether the nature of the entity’s key activities lends itself to both quantitative and qualitative measurement;
  • The relative usefulness and cost of developing and reporting against both quantitative and qualitative measures; and
  • Whether the rationale for the mix of measures is properly documented.

PGPA Rule reference: Subsection 16EA(d)
Met? Yes/No:
Action proposed:

Does the suite of performance measures include measures of output, efficiency and effectiveness that are appropriate in the context of the entity’s purposes or key activities?

Consider:

  • Whether the nature of the entity’s key activities lends itself to measuring outputs, effectiveness and efficiency;
  • Whether the mix of measures enables a complete assessment of the entity’s performance;
  • Whether the entity’s approach to measuring its performance is properly documented. For example, where an entity decides that output, efficiency or effectiveness measures are not appropriate in the context of key activities and purposes, whether that decision is documented.

PGPA Rule reference: Subsection 16EA(e)
Met? Yes/No:
Action proposed:

Do the performance measures enable an assessment of performance over time?

Consider:

  • Whether the way performance is measured is consistent over time;
  • Whether the types of performance measures used reflect the different stages or maturity of implementation of programs or activities;
  • Whether the rationale for any changes to performance measures is documented and reported.

PGPA Rule reference: Subsection 16EA(f)
Met? Yes/No:
Action proposed:

Is a target specified for each performance measure, where it is reasonably practicable to set a target?

Consider:

  • Is there a rational basis for each target? Does the corporate plan include details of these rationales where they will assist the reader to better understand the target(s)?
  • Is the entity satisfied that all targets are attainable and do not promote adverse results or perverse incentives?
  • Where it is not reasonably practicable to set a target, is the reason for this documented and reported?

PGPA Rule reference: Subsection 16E(2), table item 5(b)
Met? Yes/No:
Action proposed:

