Saturday, June 25, 2016

10 factors 9: deliverables

9.    Deliverables. As each deliverable is completed, hand it over formally to your customer. Ask them to verify acceptance, to make sure it meets their expectations. Only then can you consider the deliverable 100% complete.
The 'deliverable' cycle starts with the identification of deliverables in a work breakdown structure and the setting of their performance requirements, trade-off criteria and acceptance rules. Without these, no one knows when they have a 'deliverable'.

The performance requirements are critical to understanding the job the deliverable will be required to perform. They have to be stated unambiguously, objectively and in a testable manner.

As far as scheduling goes, a deliverable is not delivered until it is accepted. Thus, the deliverable cycle has to set a milestone for the delivery, following project-internal acceptance testing against the stated and agreed criteria, and then identify customer acceptance testing as its own activity, which might take some time. If you cannot point to the event of handing over a completed deliverable for acceptance testing, it becomes very hard to point to the customer acceptance activities as the source of delay.
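To make the cycle concrete, here's a minimal sketch in Python (the field names and the dated hand-over milestone are my own illustration, not any particular methodology) of the record you might keep per deliverable, so the hand-over event is always something you can point to:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Deliverable:
    """One WBS deliverable, tracked through its acceptance cycle."""
    name: str
    performance_requirements: list[str]          # stated, objective, testable
    internal_test_passed: Optional[date] = None  # project-internal acceptance
    handed_over: Optional[date] = None           # the milestone: formal hand-over
    customer_accepted: Optional[date] = None     # 100% complete only after this

    def is_complete(self) -> bool:
        # A deliverable is not delivered until it is accepted.
        return self.customer_accepted is not None

    def acceptance_delay_days(self, as_at: date) -> int:
        """Days the deliverable has sat with the customer awaiting acceptance."""
        if self.handed_over is None or self.is_complete():
            return 0
        return (as_at - self.handed_over).days
```

With `handed_over` recorded as a dated milestone, the days spent in customer acceptance are evidenced rather than argued.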

Friday, June 17, 2016

Value

I recently hopped onto the IVMA website, suggested by a colleague, to look at the content added by Roy Barton on 'value'. Appropriate for an organisation that seeks to manage value.

It is interesting to compare the construction industry view of value, as a benefit compared between options, with a more financial or economic approach.

When it comes to assessing 'value for money' in my procurement and construction roles, I've heard much said, but often little that advanced the cause. Happily, Roy's work is a welcome clarification on what constitutes value for money.

Value for money is not an absolute concept, but a comparative one, created on the basis of market valuation of relevant factors (and here concepts used in cost-benefit analysis come to mind).

The benefits of an investment, often assessed qualitatively in asset projects (at least at the level of the asset team), need to be quantified, as do the costs to the owner and user/s (in true CBA style). Then we've got input to a comparative value for money assessment.

What we are seeking is an opportunity cost comparison based on estimates of costs and quantified ($-based) benefits from a project, running over the project life (or a reasonable period that allows for comparison). Factored into this, if we are being rigorous, is the pricing of (real) options: future opportunities to take up activities that will produce a meaningful return, and to drop activities that fail to do so.

The value for money consideration is what set of benefits to costs we get for investment A compared to investment B, or any other likely use of the money. A market comparator can always be helpful as an 'umpire' for the exercise: a set of financial instruments with a similar risk profile to the project options before us.

There is no short-cut to assessing value for money; it's all about what else could be done with the money. If an alternative investment achieves a greater return (more benefits), then the VFM of the project in question drops. If it drops too far then, irrespective of any theoretical NPV the project might have, it represents a nett cost: another use of the money would produce a greater benefit, and that benefit is foregone. The converse also applies, of course.
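As a minimal sketch of that logic (Python; the cash flows and the comparator rate are invented for illustration), discounting at the return of a similar-risk market comparator makes the opportunity cost explicit:

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value of yearly cash flows; cashflows[0] is year 0 (the outlay)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Discount at the return of a market comparator with similar risk:
# the opportunity cost of the money, not a risk-free rate.
comparator_rate = 0.08   # hypothetical: a similar-risk portfolio returns 8% p.a.

project_a = [-10_000_000] + [1_800_000] * 10  # outlay, then quantified net benefits
project_b = [-10_000_000] + [1_500_000] * 10

# A positive NPV at the comparator rate means the project beats the
# alternative use of the money; a 'theoretical' NPV at a lower rate can
# still be a nett cost once the foregone alternative is priced in.
print(f"Project A vs comparator: {npv(comparator_rate, project_a):,.0f}")
print(f"Project B vs comparator: {npv(comparator_rate, project_b):,.0f}")
```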

Benefits can be assessed on the basis of costs imposed upon or taken off the owner or user: maintenance and other operating costs are the obvious first port of call, but costs imposed on or taken off users are important too.

An example of this in a retail centre: I can reduce the users' cost/time in travel by adding cinemas to my retail centre; that will make the cinema part of the 'destination', making the centre more attractive than otherwise, and drive patronage. For the investor, the additional capital and operating costs are repaid by a greater return from higher patronage.

Is this value for money? Compared to stand-alone cinemas some distance away, probably; and it can be measured on a comparative basis, and compared with an investment in shares in a similarly balanced portfolio of retail and cinema operators.
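A rough worked version of the cinema example (Python; every figure is invented for illustration): monetise the users' travel saving and the investor's extra margin, net off the extra costs, and you have a number that can be compared with the alternative investment:

```python
# Hypothetical annual figures for adding cinemas to a retail centre.
visits_per_year = 400_000
travel_time_saved_hours = 0.5       # per visit, vs stand-alone cinemas
value_of_time = 20.0                # $/hour, a CBA-style assumption

user_benefit = visits_per_year * travel_time_saved_hours * value_of_time

extra_patronage_margin = 1_200_000  # investor's extra net income from patronage
extra_operating_cost = 600_000
annualised_capital_cost = 900_000   # extra capital, spread over the asset's life

net_annual_benefit = (user_benefit + extra_patronage_margin
                      - extra_operating_cost - annualised_capital_cost)

print(f"User benefit:       ${user_benefit:,.0f}")
print(f"Net annual benefit: ${net_annual_benefit:,.0f}")
# Value for money is then judged by comparing this net benefit with the
# return from a similarly balanced portfolio of retail and cinema operators.
```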

___

After I'd written this, I came across a very concise explanation of comparative value for money in a piece of draft legislation I reviewed:

"x represents value for money in that the costs of the services are reasonable, relative to both the benefits achieved and the cost of alternative equivalent benefit producing services"

This references the market for price-benefit setting, takes into consideration the opportunity cost of a service and the benefits that accrue to the user.

Monday, June 13, 2016

What is a 'project'?

I've read lots of definitions of 'project'; so have you, I'm sure, but I've never been content with them: mostly they are boring statements of the obvious, along the lines of 'a project is a temporary effort with a start and a finish that...'

Currently reading Shenhar's Reinventing Project Management, I came to my own take: a project is the deployment of resources to profitably change an organisation's capability. The organisation might be the project sponsor, or a client.

The core of a project is that a change in the (business) environment, or an opportunity (which arises inevitably from a change), creates the ground for deploying resources to stabilise the organisation's interaction with the now-changed (or about-to-change) environment, in line with the organisation's strategy or mission, which may itself be changed by the project.

Projects provide capability, operations use it.

Tuesday, June 7, 2016

Which one? #2

I've criticised the common weighted scoring method of evaluating proposals as a type of voodoo: a ceremony that appears to be attached to reality but, in reality, is not. It is so unprotected against common cognitive biases and misapprehensions that it may as well be 'voodoo'.

What to do, then?

Rather than scoring subjectively and imagining this produces meaning, let alone numbers that can legitimately be manipulated mathematically, it is more likely to be accurate to rank proposals by measures of actual criteria achieved, then set hurdles for a Stage 1 evaluation against requirement domains.

The Stage 1 evaluation grid ranks each proposal against each requirement domain. The 'hurdle' is the minimum rank that must be achieved to be satisfactory, or 'in consideration'. I've set it so that at least half the domains must jump the hurdle for the proposal to proceed to the next stage.

The ranks are achieved objectively. For instance, under 'compliance' the proposal might need to contain satisfactory information about: board overview; corporate governance systems; means of complying with WHS requirements; means of ensuring compliance with local government approval conditions; method of ensuring sound procurement of sub-contractors; how relevant board and executive committees are engaged on this project; and how legal and procedural obligations under the contract are met. That's a total of 7 areas of interest (in reality there should be quite a few more specified). Count the satisfactory ones and give a rating.

Proposal 'b' has a rating of 2: it meets more than 60% but less than 81% of requirements; that is, 5 of the 7 items are satisfactory. This is a binary choice with no 'grades': an item is either in or out.
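A sketch of the Stage 1 arithmetic in Python. The 1-3 rating bands are my assumption, chosen only to be consistent with proposal 'b' rating 2 for 5 of 7 items; the hurdle values in the demonstration are invented too:

```python
def domain_rating(items_satisfied: int, items_total: int) -> int:
    """Binary items: each is either in or out. Assumed bands:
    3 if more than 80% satisfied, 2 if more than 60%, else 1."""
    pct = items_satisfied / items_total
    if pct > 0.80:
        return 3
    if pct > 0.60:
        return 2
    return 1

def passes_stage_1(ratings: dict[str, int], hurdles: dict[str, int]) -> bool:
    """At least half the domains must jump their hurdle (minimum rank)."""
    cleared = sum(ratings[d] >= hurdles[d] for d in hurdles)
    return cleared >= len(hurdles) / 2

print(domain_rating(5, 7))  # 2: proposal 'b', 5 of 7 compliance items satisfactory

hurdles   = {"compliance": 2, "capability": 2, "urgency": 3, "value": 2}
ratings_b = {"compliance": 2, "capability": 3, "urgency": 2, "value": 2}
print(passes_stage_1(ratings_b, hurdles))  # True: 3 of 4 domains clear the hurdle
```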

Stage 2

At Stage 2 we get serious. It would be rare for any contender to be over the hurdle in all domains, so we work only with those who meet the 'clearance count': the number of domains cleared for further consideration. The criticality of each domain for project success is reflected in its hurdle rating.

Further consideration should be a probabilistic evaluation of the value (expected value) that the owner would obtain from the proposal.

Let's say proposal 'b' is set at $10m. In the domain of 'urgency', for example, we see that the contractor will deploy on site much later than expected, increasing the risk of an overrun by, say, 10%. The owner would lose $10k per day of overrun, so the effective (expected) loss is $1k per day...and so on.
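Spelling that arithmetic out (Python; the days-at-risk figure is my own assumption, added only to carry the example through to a total):

```python
price = 10_000_000            # proposal 'b' as set

# Domain 'urgency': later site deployment raises the overrun risk.
overrun_probability = 0.10    # 10% added risk of overrun
owner_loss_per_day = 10_000   # $10k per day
expected_loss_per_day = overrun_probability * owner_loss_per_day  # $1k per day

days_at_risk = 60             # assumed exposure window, for illustration only
expected_value_adjustment = expected_loss_per_day * days_at_risk

effective_cost = price + expected_value_adjustment
print(f"Expected loss per day:          ${expected_loss_per_day:,.0f}")
print(f"Effective cost of proposal 'b': ${effective_cost:,.0f}")
```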

This approach will give calculated and examinable numbers related to value produced.

In 'soft' projects, a similar approach would apply, but the estimating environment and related calculations would need to be handled differently.


Saturday, June 4, 2016

Which one? #1

If you have been involved in any large project procurements, I'm sure you've encountered the use of weighted scale ranking of proposals.

These attempt to bring objectivity to the evaluation while dealing with the large amount of diverse information in each proposal.

They usually end up in a matrix of weighted factor scores; the example I have in mind comes from an evaluation of an IT system.

There are numerous problems with the approach:

In most cases the weights are arbitrary: unscaled, uncalibrated and without repeatable reference to the real world. There is also scale compression for low-weighted scores versus high-weighted ones, misleading scorers as to the effect of their scores. They are nothing like university grades, where a score is weighted by the proportion of the academic program that the course represents.

The score can be sensitive to low-weight 'herding': a number of high scores on low-weight factors can overwhelm a high score on a high-weight factor, as illustrated below.

Proposal 'b' scores '30' for 'value', presumably an important factor, and has the lowest score of any proposal on it. It is nevertheless catapulted to the highest total by high scores on some low-weight factors.
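To see the herding effect with concrete numbers (Python; all figures are invented, since the original matrix isn't reproduced here, apart from 'b' scoring 30 on 'value'):

```python
# Weights and raw scores are invented, purely to show the mechanism.
weights = {"value": 0.40, "usability": 0.15, "support": 0.15,
           "documentation": 0.15, "vendor locale": 0.15}

scores = {
    "a": {"value": 80, "usability": 50, "support": 55,
          "documentation": 50, "vendor locale": 45},
    "b": {"value": 30, "usability": 95, "support": 95,
          "documentation": 95, "vendor locale": 95},
}

for proposal, s in scores.items():
    total = sum(weights[f] * s[f] for f in weights)
    print(proposal, round(total, 1))
# a: 0.4*80 + 0.15*(50+55+50+45) = 32 + 30 = 62.0
# b: 0.4*30 + 0.15*(95*4)        = 12 + 57 = 69.0  <- 'b' wins despite
#    having the lowest score on the heavily weighted 'value' factor.
```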

One could argue that the weightings as a whole deal with this, in terms of the objectives of the project; but without evidence that the problems of scaling, calibration and variability arising from arbitrary assignment have been addressed, the scheme remains vulnerable to inadvertent manipulation or outright error. I wonder how many procurements of large projects have gone off the rails because of this approach?

There are a few ways of overcoming this. I will address them in the next post on this topic.