Saturday, October 29, 2016

Project Success

I recently presented on Achieving Project Success.

Most discussion I've heard on this topic wanders around requirements, definition and budget.

But there's more. Those topics are important, and neglecting them will frustrate success, but they are not sufficient.

Others discuss project processes, particularly the setting of acceptance criteria at relevant points in a project: for deliverables, but also for completion of capability elements.

Again, neglect here will frustrate success, but attention alone is insufficient.

My talk covered three topics; some absorb the matters above, but I took a more global view of the project:

Project Success Baseline
  • Objective and requirements set out in sufficient detail.
  • Contract distributes risk on the basis of control and avoids risk switchback during construction and concession phases. Risk allocation is adjusted for phase-based role changes between the parties.
  • Contract sets out governance between parties, performance monitoring and control provisions.
  • Principal recourses include step-in rights, performance bonds, contracted dispute procedure, termination.
  • Suitably experienced contractor and management for type and size of project.
  • Suitable and adequate financing structure for contracts used (such as construction finance and operational payments).
  • Cooperative contracting approach with regular project ‘conferences’ to maintain tempo and performance.
Project Governance
  • Project Board (senior reps of primary parties, Principal is chair) to overview performance and approve ‘stage gate’ progress. Meets at and between stage gates.
  • Project Control Group (reps of delivery parties, Project Director is chair) meets at least monthly to review progress, the risk rolling wave (risk retirement, treatment and emergence), and satisfaction of milestone and phase acceptance criteria.
  • Constant review of project performance climate: risk events, communication and cooperation between parties, critical stakeholder and participant views against project conduct criteria.
Project Control
  • Earned value reporting on pre-concession phases, with forecast of milestone achieved dates (to identify slippage) and corrective actions to be taken (a minimal earned value sketch follows this list).
  • Tracking of sub-contracting commitments and performance.
  • Review of progress and performance by Principal, SPV and financiers, reported to Principal.
  • Value reviews to ensure appropriate investment in performance of product and implications for concession period operations.
  • Principal retains visibility of sub-contractor procurement methodology and performance.
  • Periodic reviews of the delivery parties' project management performance.
  • Project level relationship between Principal and providers maintained through formal and informal reviews of performance.
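
To make the earned value item above concrete, here is a minimal sketch in Python (all figures are invented, not from any real project) of how planned value, earned value and actual cost roll up into the schedule and cost indices and a simple forecast of cost at completion:

    # Minimal earned value sketch; all figures are illustrative only.
    planned_value = 400_000          # budgeted cost of work scheduled to date (PV)
    earned_value = 360_000           # budgeted cost of work performed to date (EV)
    actual_cost = 420_000            # actual cost of work performed to date (AC)
    budget_at_completion = 1_200_000

    spi = earned_value / planned_value           # schedule performance index; < 1 signals slippage
    cpi = earned_value / actual_cost             # cost performance index; < 1 signals overspend
    estimate_at_completion = budget_at_completion / cpi   # simple CPI-based forecast

    print(f"SPI {spi:.2f}, CPI {cpi:.2f}, EAC ${estimate_at_completion:,.0f}")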

Sunday, October 23, 2016

Risk Criticality

We've all been down the risk ceremony path: where risk management starts with a 'workshop', descends into a matrix, then disappears.

Risk has a number of dimensions, and Shenhar's book is a great start to thinking about risk in an organised manner. After Shenhar, risk should be assessed in terms of the vulnerability of dependencies to failure events (and failure modes become important) on a probabilistic basis. These should then be assessed for effect on schedule, investment and performance to produce actions that will mitigate, if not avoid, the risk.
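
As a sketch of that assessment (Python; the dependencies, probabilities and impacts are invented, not from any real project), each dependency's failure modes carry a probability and an effect on schedule and cost, which roll up to expected impacts that point to the actions worth taking:

    # Illustrative only: failure modes of project dependencies, assessed probabilistically.
    # Each entry: (dependency, failure mode, probability, schedule delay in weeks, extra cost $)
    failure_modes = [
        ("ground conditions", "rock encountered in excavation", 0.20, 3, 250_000),
        ("facade supplier",   "fabrication capacity shortfall", 0.10, 6, 400_000),
        ("design approvals",  "authority requests redesign",    0.15, 4, 120_000),
    ]

    for dep, mode, p, delay_weeks, cost in failure_modes:
        print(f"{dep} / {mode}: expected delay {p * delay_weeks:.1f} wks, "
              f"expected cost ${p * cost:,.0f}")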

You probably know the near-pointless and potentially misleading 'matrix' whose bubble both Eight to Late and Cox have pricked.

The outcome of basing project management on a mature understanding of risk should be the criticality of events to completion, budget or technical performance. This then drives mitigating actions: abatement and avoidance or, if the risk is minor, ignoring it (or buffering in schedule or budget).

Wednesday, October 5, 2016

Shenhar's project taxonomy

He calls it 'the diamond approach' to project understanding, and it has great ideas: at last, some thinking that lifts project thinking to the level of a workable theory, rather than just a set of practices (and therefore a 'craft').

I refer to Shenhar and Dvir's Reinventing Project Management, published by Harvard Business School Press.

Four dimensions are identified: Technology, Novelty, Pace, Complexity. There are four gradation steps for Technology and Pace, and three for the other two. There should be four for them too, which I'll suggest below.

Some of the terminology is less than precise and could be improved, but I guess it leans to a simple working project language to make its point.

For instance, take Technology: we range from 'low-tech' to 'super-high-tech', with parameters for classification brushed in the broadest strokes.

This is insufficiently objective, and different domains will understand terms differently.

'Low-tech' would indicate that components and production techniques for the finished product are predominantly conventional off the shelf items and/or require engineering that is ubiquitously available in the relevant market. 'Super-high-tech' on the other hand would be a product that relies on experimental development of unique processes or products not available anywhere in any domain.

The experimental definition identifies the risk to schedule, budget and performance instantly. Quantification has its own problems, but historic cost escalation could give some leads: for example, the Sydney Opera House, the Lockheed Lightning military aircraft (distinct from the English Electric Lightning of times past: a faster plane by far), the Apollo program, Polaris missile development...the list is long.
 
Similarly for Pace, which runs from 'standard' to 'blitz'. These can be defined by the additional investment needed to shorten delivery time. So 'standard' is the economical pace imposing no opportunity cost penalty on the promoter. 'Blitz' would be a pace that requires resources and working methods that represent a potential (and significant) opportunity cost penalty to the promoter. Sometimes this can be offset by a 'time to market' benefit, but not always.

Novelty needs a first step, prior to 'derivative'. Derivative implies that work is based on, but not identical to, other recent examples. In the building industry this doesn't work. The prior step would be 'repeated'. Many commercial buildings, most factories and most houses are 'repeated' projects. The large proportion of materials, techniques and skills required are identical to those required by the previous project: nothing new but configuration and, sometimes, site conditions.

Similarly for 'complexity'. This dimension is quite difficult, but the three steps listed are adequately defined to be useful. I would add 'adapted' between 'assembly' and 'system'. Take a typical factory unit development: mostly these are an assembly of known items by known methods, and represent a 'repeated' project. However, a factory for a specialist application (I think of a large printing works that I was involved in), which required fine-tuning of equipment and process, including the use of AGVs and automated paper warehousing and 'publishing' equipment, while not a 'system' in Shenhar's sense, was more than a mere assembly. The way things were brought together led to responsive interactions between 'assembled' components that amounted to an adaptation that lifted the level of complexity.

The parameter that measures this dimension is the relationships between 'systems'. An assembly is work within, essentially, a single delivery or industrial system; adaptation is known systems in a novel or rare (to the project team) interacting arrangement. Shenhar's 'systems' represent a unique configuration of subsystems and require (some) specialised (not ubiquitous) processes to enable the product to meet its performance requirements and allow the customer to achieve the mission for the product. Array is the most complex, with interacting unique systems or configurations.
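
As a data-structure sketch (Python), here is the diamond with my suggested 'repeated' and 'adapted' steps added; the intermediate labels for Technology, Pace and Novelty are my recollection of Shenhar and Dvir's terms and should be checked against the book:

    from enum import Enum

    # Shenhar & Dvir's four dimensions, with the two extra steps suggested above.
    class Technology(Enum):
        LOW_TECH = 1
        MEDIUM_TECH = 2
        HIGH_TECH = 3
        SUPER_HIGH_TECH = 4

    class Novelty(Enum):
        REPEATED = 0        # my suggested prior step
        DERIVATIVE = 1
        PLATFORM = 2
        BREAKTHROUGH = 3

    class Pace(Enum):
        STANDARD = 1
        FAST_COMPETITIVE = 2
        TIME_CRITICAL = 3
        BLITZ = 4

    class Complexity(Enum):
        ASSEMBLY = 1
        ADAPTED = 2         # my suggested intermediate step
        SYSTEM = 3
        ARRAY = 4

    # A 'repeated' factory unit development classified on the diamond:
    factory_unit = (Technology.LOW_TECH, Novelty.REPEATED, Pace.STANDARD, Complexity.ASSEMBLY)
    print(factory_unit)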

Thursday, September 8, 2016

Change failures I've seen

Change fails when:

  • current delivery systems or system interfaces are insufficiently known or understood
  • systems are not studied for change opportunities or responses
  • the work of change is on separate projects that deliver lovely pieces of paper, but no renewal, renovation or abandonment of systems.

It also fails, or is hampered by:

  • under-resourcing
  • neglect of subject matter experts (who probably know more of issues and opportunities than any consultant will imagine), and
  • no communication at a sufficient level of operational detail to be meaningful in the real world of productive systems.


An example.

I once worked for a large corporation where we executives trotted off to a very expensive conference, in a high-priced venue, to dream up change projects. A bunch of 'projects' came out of it, of course.

These were not about places in our current systems where we could look for change and refinement, but work in the parallel universe of pieces of paper...nothing happened after our huge investment: investment wasted.

Although I did learn how to use data validation lists in Excel...a very expensive training.

Better would have been a day discussing Deming's work (some similar views at Curious Cat although this tends to be a bit 'tooly') and how we could reform our business in productive response. Alas.

Monday, September 5, 2016

5 Ws, 2 Whos and a How

Change management has grown a mantle of mythology in business (I think of Kotter's 'panic first' method and Prosci's ADKAR top-down method: which is not so bad, but seems to relegate systems to psychological affect), but sensibly, change management is a special application of project or program management. In project management we lead people to use systems to achieve productive outcomes that usually change something: often resulting in improved capabilities.

Thus: change is guided by a set of questions I label as

5 Ws, 2 Whos and a How
  1. Why - is change needed? Stimulus arrives from the environment; that is such things as the market or competition, or the desire to change capability, release resources for other purposes, etc.
  2. What - is to be changed? Systems usually, administration, delivery, staffing, production, financial. Defining what is to be changed is essential to success, just as defining requirements is essential to project success. It comes down to the capability to be produced by the change and what is needed to deliver the capability.
  3. Who - is affected by the change? Once we know what needs to change, we know who is affected. Both insiders and outsiders: manage both, engage both, utilise the knowledge of both. The insiders (staff, shareholders) are affected because they will benefit or lose. The outsiders are affected because they are customers (or suppliers, or government).
  4. When - will the change occur, start and finish? Timing in business is everything. Coordinating multiple activities (sounds like a project) is essential for all change efforts. Coordination implies communication: all affected parties need information to guide their action as part of the change; that is, they, as beneficiaries or even losers, need to be informed so that they can act.
  5. How - will the change be conducted and concluded, what tells us that change has been accomplished? Typically change is carried by system changes. Failed change usually results from failure to change the work systems and therefore the working assumptions and presumptions, thus the culture, and ends up with systems and aspiration in conflict producing waste. Failure is promoted by failed communications, inattention to systems and lack of knowledge about system and change implications.
  6. Who 2 - will conduct, participate in and bring the change? Following John Seddon, change is best produced by those who work in the system to be changed, to devise a better system to serve customers (everyone has a customer: this is whom the effort of a unit or system is to benefit either proximately or remotely), and manage sub-system interfaces. Isolate change in a 'change agent', or start with Kotter's 'panic', or Prosci's top-down psycho-patronising, and you cut off the potential biggest allies and the best placed change producers. Veneered change is faulty change; embodied change turns the system to meet its new capability requirement, with a deep understanding of current capability and the risks that change will evoke, guided and operated by people who are involved and therefore care.
As a program, change relies upon coordination between subsystem leaders: coordination is not merely reporting some metrics that pretend to show progress, but is a two way street of sharing, insight and opportunity making, one hopes, coming from an active (temporary) 'community of production'.



Monday, August 29, 2016

The triple constraint

I used to disparage the triple constraint: I thought it simplistic and unhelpful.

The three constraints (triple, because they interact) are illustrated below.
 
More recently, this has been elaborated into a double triangle: attempting to be all things to all people by describing project factors rather than the constraints.

The elaboration is unnecessary. Risk, resources and scope go to budget, budget and scope go to schedule, quality goes to scope. There are probably other schemes, but they end up in the triple constraint quite quickly.

Risk as a factor is interesting...it's like putting 'doing your job' as a factor. Risk is assessed and then used to guide the setting of budget and schedule against scope. Scope is quality.

I like Glen Alleman's much more helpful diagram, below. He introduces the concept of the 'technical performance measure': much more meaningful than either 'quality' (as if that means anything) or scope.

My preference is for a modified triple constraint that is more attuned to the business environment and the realities of projects, where a project exists to produce a capability that has value for the sponsor.

I've adapted Alleman's TPM to be 'objective', although 'performance' as the out-turn result of the project is what we are after. Fussing around with 'quality' as the centre of the triangle does nothing for anyone; what we need to focus on is the value produced: meeting a certain performance by a certain time to produce value from the investment (think Net Present Value).
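
Since the point is the value from the investment, a minimal NPV sketch (Python; the discount rate and cash flows are invented) of 'a certain performance by a certain time':

    # Illustrative net present value of a project's capability; invented figures.
    def npv(rate, cashflows):
        """Cash flows indexed from year 0 (the investment, negative) onward."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # $2.0m invested now, $450k of net benefit per year for 6 years, 8% discount rate
    project = [-2_000_000] + [450_000] * 6
    print(f"NPV: ${npv(0.08, project):,.0f}")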

The money is invested; it's not just a cost, and the investment has to be carefully managed to ensure that the sought return achieves the necessary value. We don't manage 'time'. Alleman is right that we manage schedule, but I prefer to use delivery, as this focuses attention on the project doing what it is meant to do: deliver value.

Friday, August 26, 2016

Stoplights

After gantt charts, the next favourite topic of communicative techniques is the project 'stop light' chart. Three colours (red, amber, green) to give a 'high level' view of project performance. However, they are criticised.

On a real project (any decent-sized construction project), I've never seen anything as puerile as a stop light pretend-chart; what I've seen are detailed status reports.

There are only two main parameters for projects (once we've established the sought performance of the product): out-turn cost (estimate at completion, EAC) and date of (defect free) completion (DFC). One could also report drift in these parameters, and if there are unmade decisions creating schedule or cost risk, the EAC and DFC date could have probability ranges attached. Not that a board will pay much attention to the implications of either: best convert them to a date range and a cost range.
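
A sketch of what attaching probability ranges might look like (Python; the distributions and figures are invented), converting EAC and the DFC date into the cost range and date range a board will actually read:

    # Illustrative: express EAC and the defect-free completion (DFC) date as P10-P90 ranges.
    import random
    from datetime import date, timedelta

    random.seed(1)
    trials = 10_000
    eac_samples = [random.triangular(48e6, 60e6, 52e6) for _ in range(trials)]
    delay_samples = [random.triangular(0, 120, 30) for _ in range(trials)]  # days beyond baseline

    def percentile(samples, p):
        ordered = sorted(samples)
        return ordered[int(p * (len(ordered) - 1))]

    baseline_dfc = date(2017, 6, 30)
    print(f"EAC range (P10-P90): ${percentile(eac_samples, 0.10):,.0f} "
          f"to ${percentile(eac_samples, 0.90):,.0f}")
    print(f"DFC range (P10-P90): {baseline_dfc + timedelta(days=percentile(delay_samples, 0.10))} "
          f"to {baseline_dfc + timedelta(days=percentile(delay_samples, 0.90))}")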

On the other hand, some people seem to love RAG status reports. Maybe because they remind them of lollypops...I can't think of a grown-up's reason to like them.

Tuesday, August 23, 2016

Project Graphics

In a post some time ago on the Tufte forum, ET criticises the information sparseness of typical PM gantt charts and their derivatives. I'd agree with him, but wondered what to do instead. I've played with a few alternatives. I like the medical chart work; something like this could be effective, but I'm not sure how. Projects can be too complex: think of a $200m retail complex with cinemas, parking, public facilities...let's say 1,500 high-level tasks.

This topic has been explored by others. One ended up with a vertical gantt chart. I don't see how this is an improvement, when people intuitively read a gantt chart left to right. A variation on this seeks to use the thickness of the gantt bars to carry information.

Based on this idea, I thought to add information numerically (thus, not quite a graphic development).

The baseline (or Original Target) is the line with diamond ends, the current forecast/actual (CF/A) is the hollow bar, and progress (PAD -- production at date) is indicated by the coloured solid bar. This is earned value progress: the schedule performance index (SPI). If it is less than 1 (under-performing) the bar is amber and lags the report date, representing, as shown, an SPI of 0.9: 90% of the distance between the start line and the report line.
If greater than 1, as on the lower bar (SPI of 1.2), the activity is 'over' performing and the bar is green. The surrounding numbers are self-evident on inspection.
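
The geometry is simple to sketch (Python; the names and numbers are mine): the end of the solid bar sits at SPI times the distance from the activity start to the report date, and its colour follows the SPI.

    # Illustrative: place and colour the progress (PAD) bar from the schedule performance index.
    def pad_bar(start_day, report_day, spi):
        """Return (bar_end_day, colour) for the solid progress bar."""
        bar_end = start_day + spi * (report_day - start_day)   # SPI 0.9 -> 90% of the way to the report line
        colour = "amber" if spi < 1.0 else "green"
        return bar_end, colour

    print(pad_bar(start_day=0, report_day=20, spi=0.9))   # (18.0, 'amber'): lags the report date
    print(pad_bar(start_day=0, report_day=20, spi=1.2))   # (24.0, 'green'): ahead of the report date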

A development of this is to strip the numbers off the bars, as below.

Here the activity name is above the 'action' bar. A blue solid bar indicates a completed task, and the black thick vertical end line represents the completion date. Early completion is shown in Activity name 3, and late completion in Activity name 4. As can be seen, Activity name 1 is substantially behind schedule, and Activity name 2 reasonably ahead of schedule.

Gauging progress is always problematic, unless the activity is easily measured: laying bricks or pouring concrete. It is much harder for intangible activities, like design. Here I break the activity into easily assessed sub-activities and measure those on a 0-100 basis. For design this might break into: requirements confirmed, preliminary options completed, preliminary options presented, preferred option identified (as a milestone), and so on. This is easier to estimate as well; and even easier if there is relevant 'reference class' historic data (and more at the Transportist).
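
A sketch of the roll-up (Python; the sub-activities and weights are examples only, not a prescription):

    # Illustrative: percent complete for an intangible activity from weighted sub-activities.
    design_subactivities = [
        # (sub-activity, weight, percent complete 0-100)
        ("requirements confirmed",        0.15, 100),
        ("preliminary options completed", 0.30, 100),
        ("preliminary options presented", 0.15,  50),
        ("preferred option identified",   0.10,   0),   # milestone: scored 0 or 100 only
        ("developed design documented",   0.30,   0),
    ]

    total_weight = sum(w for _, w, _ in design_subactivities)
    progress = sum(w * pc for _, w, pc in design_subactivities) / total_weight
    print(f"Design activity progress: {progress:.1f}%")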

The only drawback with my concept is that one must be using EVM to track production. But, that's a good thing.

Thursday, July 28, 2016

Leadership

Leadership is the practice of providing to people an environment where they can sustainably and beneficially explore capability, opportunity and enquiry in pursuit of mission.

Monday, July 25, 2016

10 factors 10: the team


10. Your team. Be a great people manager. Show them the project vision and how they can make it happen. Motivate them. Trust and believe in them. Make them feel valued. They will work wonders.
No manager 'motivates' people. People motivate themselves. Unfortunately, if the manager is not careful, he or she will end up DE-motivating people. Dangerous!

Nor do you 'make' people feel valued. If you conceptualise the project team as a community of adults intent on a productive outcome, then you will inevitably value people's expertise and they in turn, being valued, will deliver commitment.

Monday, July 4, 2016

Doing it right

In Deming and disease, I wrote of an example of the typical management mish-mash often seen in business.

Deming's recommendations may appear recondite to some, so here are some examples.

One that touches on the topic is Seddon's "Systems thinking, lean production and action learning" in Action Learning: Research and Practice, 2007, v. 4, n. 1.

John Seddon practices systems work in organisations through Vanguard Consulting in the UK.

No commercial relation; I just think his philosophy, that the people doing the work are best placed to understand the system they work in, is the right one.



Sunday, July 3, 2016

Adding up for recruitment

I was recently involved in a recruitment where we scored the candidates against a number of criteria, then added the scores to determine the preferred candidate.

I wondered about this in terms of my post on scoring in procurement selections.

In the recruitment, did we rank candidates in Likert-scale fashion, or did we give them a score in school-quiz fashion?

If it was a quiz score, then adding might have been valid...if the scores were on the same base; but if they were Likert ranks, then we should have counted the number at each rank and compared those counts.

This would be closer to the approach I discussed, where a large number of criteria are established against which binary success-failure is determined. We then count the number of successes, and that person becomes the preferred one.
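
A sketch of that counting approach (Python; the criteria and results are invented):

    # Illustrative: binary pass/fail against many criteria, then count the passes per candidate.
    criteria = ["domain experience", "stakeholder management", "budget control",
                "written communication", "scheduling", "contract administration"]

    candidates = {
        "A": [True, True, False, True, True, False],
        "B": [True, False, True, True, False, False],
        "C": [True, True, True, False, True, True],
    }

    counts = {name: sum(results) for name, results in candidates.items()}
    print(counts)                                     # {'A': 4, 'B': 3, 'C': 5}
    print("Preferred:", max(counts, key=counts.get))  # C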

Deming and disease

I attended a talk recently where the speaker told us that she had just executed a very unpopular restructure in her firm, and that she was about to embark on an equally unpopular session of 'performance appraisals' that involved fitting people's performance to a bell curve.

Her c.v. in the conference papers mentioned that she had consulted using Deming's 'TQM'.

What a mish mash of conflicting and half-understood ideas.

TQM, at best, is an attempt to mechanise Deming, rather than to adopt his management philosophy.

His philosophy is encapsulated in his "System of Profound Knowledge".


Implementation is guided by his '14 points' to avoid or overcome the '7 diseases of management'.

The speaker did not appear to be familiar with any of them.

Summarising:

The 14 points

1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and to stay in business, and to provide jobs.
2. Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn their responsibilities, and take on leadership for change.
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.  
4. End the practice of awarding business on the basis of price tag. Instead, minimize total cost. Move toward a single supplier for any one item, on a long-term relationship of loyalty and trust.
5. Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.
6. Institute training on the job.
7. Institute leadership (see Point 12 and Ch. 8). The aim of supervision should be to help people and machines and gadgets to do a better job. Supervision of management is in need of overhaul, as well as supervision of production workers.
8. Drive out fear, so that everyone may work effectively for the company (see Ch. 3).
9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force.
  • Eliminate work standards (quotas) on the factory floor. Substitute leadership.
  • Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.
11. Remove barriers that rob the hourly worker of his right to pride of workmanship. The responsibility of supervisors must be changed from sheer numbers to quality.
12. Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective (see Ch. 3).
13. Institute a vigorous program of education and self-improvement.
14. Put everybody in the company to work to accomplish the transformation. The transformation is everybody's job.

And, the 7 Deadly Diseases

1. Lack of constancy of purpose to plan product and service that will have a market and keep the company in business, and provide jobs.
2. Emphasis on short-term profits: short-term thinking (just the opposite from constancy of purpose to stay in business), fed by fear of unfriendly takeover, and by push from bankers and owners for dividends.
3. Evaluation of performance, merit rating, or annual review.
4. Mobility of management; job hopping.
5. Management by use only of visible figures, with little or no consideration of figures that are unknown or unknowable.
6. Excessive medical costs.
7. Excessive costs of liability, swelled by lawyers that work on contingency fees.

The last two are less of a problem in Australia, though the last is worth keeping an eye on.

Point 10 and Disease 3 appear to have been lost on the speaker.

Part of her problem was a mechanistic top-down and disempowering theory of leadership: the popular theory.

Opposed to this is the view that Mintzberg espouses and calls 'communityship'; this is more reflective of how adults work together to be productive, creative and committed to a mission. The manager's job in terms of 'leadership' is to create for people an environment where this is sustained, and ensure the flow of resources and information to enable it to happen.

In general management, and so in project management as well.

Saturday, June 25, 2016

10 factors 9: deliverables

9.    Deliverables. As each deliverable is complete, hand it formally over to your customer. Ask them to verify acceptance to make sure it meets their expectations. Only then can you consider each deliverable as 100% complete.
The 'deliverable' cycle starts with their identification in a work breakdown structure and the setting of their performance requirements, trade-off criteria and acceptance rules. Without these, no one knows when they have a 'deliverable'.

The performance requirements are critical to understanding the job the deliverable will be required to perform. They have to be stated unambiguously, objectively and in a testable manner.

As far as scheduling goes, a deliverable is not delivered until it is accepted. Thus, the deliverable cycle has to milestone the delivery, following project-internal acceptance testing against the stated and agreed criteria, then identify customer acceptance testing, which might take some time. If you cannot point to the event of handing over a completed deliverable for acceptance testing, then it becomes very hard to point to the customer acceptance activities as being the source of delay.

Friday, June 17, 2016

Value

I recently hopped onto the IVMA website, suggested by a colleague, to look at the content added by Roy Barton on 'value'. Appropriate for an organisation that seeks to manage value.

It is interesting to compare the construction industry view of value, as a comparison of benefit between options, with a more financial or economic approach.

When it comes to assessing 'value for money' in my procurement and construction roles, I've heard much said, but often little that advanced the cause. Happily, Roy's work is a welcome clarification on what constitutes value for money.

Value for money is not an absolute concept, but a comparative one, created on the basis of market valuation of relevant factors (and here concepts used in cost-benefit analysis come to mind).

The benefits of an investment, often assessed qualitatively in asset projects (at least at the level of the asset team), need to be quantified, as do the costs to the owner and user/s (in true CBA style). Then we've got input to a comparative value for money assessment.

What we are seeking is an opportunity cost comparison based on estimates of costs and quantified ($-based) benefits from a project, running over the project life (or a reasonable period that allows for comparison). Factored into this, if we are being rigorous, is pricing of (real) options into the future for opportunities to take up activities that will produce a meaningful return and drop off activities that fail to achieve a meaningful return.

The value for money consideration is what set of benefits to costs do we get for investment A compared to investment B or any other likely use of the money...a market comparator can always be helpful as an 'umpire' for the exercise. This would be a set of financial instruments with a similar risk to that of the project options before us.

There is no short-cut to assessing value for money; it's all about what else could be done with the money: if there is an alternative investment achieving a greater return (more benefits), then the VFM of the project in question drops. If it drops too much, then, irrespective of any theoretical NPV the project might have, it represents a nett cost: another use of the money would produce a greater benefit, and this is foregone. The converse also applies, of course.
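
In sketch form (Python; all cash flows and rates are invented), the comparison is just the NPVs of the competing uses of the money at a discount rate set by a market comparator of similar risk:

    # Illustrative value-for-money comparison of two project options.
    # The discount rate stands in for the market comparator of similar risk (the 'umpire');
    # investing at that rate has an NPV of roughly zero, so a positive NPV beats the alternative.
    def npv(rate, cashflows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    market_rate = 0.09
    option_a = [-10_000_000] + [1_700_000] * 15   # build and operate
    option_b = [-13_000_000] + [2_100_000] * 15   # build larger (e.g. add the cinemas) and operate

    for name, flows in (("A", option_a), ("B", option_b)):
        print(f"Option {name}: NPV ${npv(market_rate, flows):,.0f} against the market alternative")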

Benefits can be assessed on the basis of costs imposed upon or taken off the owner or user: maintenance and other operating costs is the obvious first port of call, but costs imposed or taken off users are important too.

An example of this in a retail centre: I can reduce the users' cost/time in travel by adding cinemas to my retail centre; that will make the cinema part of the 'destination', making it more attractive than otherwise, and drive patronage. For the investor, the additional capital and operating costs provide for a greater return in higher patronage.

Is this value for money? Compared to stand-alone cinemas some distance away, probably; and it can be measured on a comparative basis, and compared with an investment in shares in a similarly balanced portfolio of retail and cinema operators.

___

After I'd written this, I came across a very concise explanation of comparative value for money in a piece of draft legislation I reviewed:

"x represents value for money in that the costs of the services are reasonable, relative to both the benefits achieved and the cost of alternative equivalent benefit producing services"

This references the market for price-benefit setting, takes into consideration the opportunity cost of a service and the benefits that accrue to the user.

Monday, June 13, 2016

What is a "project'?

I've read lots of definitions of 'project'; so have you, I'm sure, but I've never been content with them: mostly they are boring statements of the obvious, along the lines of 'a project is a temporary effort with a start and a finish that...'

Currently reading Shenhar's Reinventing Project Management, I came to my own take: the deployment of resources to profitably change an organisation's capability. The organisation might be the project sponsor, or a client.

The core of a project is that a change or opportunity (which arises inevitably from a change) in the (business) environment creates the ground for deployment of resources to stabilise the organisation's interaction with the environment-now-changed (or about to be changed) in line with the organisation's strategy or mission, which itself may be changed by the project.

Projects provide capability, operations use it.

Tuesday, June 7, 2016

Which one? #2

I've criticised the common weighted scoring method of evaluating proposals as a type of voodoo: a ceremony that appears to be attached to reality but, in reality, is not. It is so unprotected against common cognitive biases and misapprehensions that it may as well be 'voodoo'.

What to do, then?

Rather than scoring subjectively and thinking that this produces meaning, let alone numbers that can be manipulated mathematically, it is more likely to be accurate to rank by measures of actual criteria achieved, then set hurdles for a stage 1 evaluation against requirement domains.

The Stage 1 evaluation grid looks like this:

The 'hurdle' is the minimum rank that must be achieved for a domain to be satisfactory, or 'in consideration'. I've also set a requirement that at least half the domains must jump the hurdle for the proposal to proceed to the next stage.

The ranks are achieved objectively; for instance, under 'compliance' the proposal might need to contain satisfactory information about: board overview, corporate governance systems, means of complying with WHS requirements, means of ensuring compliance with local government approval conditions, method of ensuring sound procurement of sub-contractors, how relevant board and executive committees are employed in respect of this project, and how legal and procedural obligations under the contract are met. That's a total of 7 areas of interest (in reality there should be quite a few more specified). Count the satisfactory ones, and give a rating.

Proposal 'b' has a rating of 2: it meets less than 81% and more than 60% of requirements; that is, 5 items are satisfactory. This is a binary choice, no 'grades': an item is either in or out.
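
A sketch of the rating mechanism (Python; the percentage bands and the hurdle are illustrative, not fixed rules):

    # Illustrative: count the satisfactory items in a domain, convert to a rank, test the hurdle.
    def domain_rank(satisfied, total):
        pct = satisfied / total
        if pct > 0.80:
            return 3
        if pct > 0.60:
            return 2          # e.g. 5 of 7 compliance items satisfactory
        if pct > 0.40:
            return 1
        return 0

    hurdle = 2                # minimum rank for the domain to be 'in consideration'
    rank = domain_rank(satisfied=5, total=7)
    print(rank, "clears hurdle" if rank >= hurdle else "fails hurdle")   # 2 clears hurdle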

Stage 2

At stage 2 we get serious. It would be rare for any of the contenders to be over the hurdle in all domains, so we work only with those who meet the 'clearance count': the required number of domains cleared for further consideration. The degree of criticality of a domain to project success is reflected in its hurdle rating.

Further consideration should be a probabilistic evaluation of the value (expected value) that the owner would obtain from the proposal.

Let's say proposal 'b' is priced at $10m. In the domain of 'urgency', for example, we see that the contractor will deploy on site much later than expected, increasing the risk of an overrun by, say, 10%. The owner will lose $10k per day of overrun. The expected loss is therefore $1k per day...and so on.

This approach will give calculated and examinable numbers related to the value produced.
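
The arithmetic of that 'urgency' example, sketched in Python (the overrun length is an invented extension of the example):

    # Illustrative expected-value adjustment for the 'urgency' domain of proposal 'b'.
    overrun_probability_increase = 0.10   # later site deployment adds, say, 10% to the chance of overrun
    owner_daily_loss = 10_000             # $ the owner loses per day of overrun
    expected_loss_per_day = overrun_probability_increase * owner_daily_loss   # $1,000 per day

    expected_overrun_days = 30            # invented: how long an overrun would run if it occurred
    proposal_b_price = 10_000_000
    adjusted = proposal_b_price + expected_loss_per_day * expected_overrun_days
    print(f"Risk-adjusted comparison price for 'b': ${adjusted:,.0f}")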

In 'soft' projects, a similar approach would apply, but the estimating environment and related calculations would need to be handled differently.


Saturday, June 4, 2016

Which one? #1

If you have been involved in any large project procurements, I'm sure you've encountered the use of weighted scale ranking of proposals.

These attempt to bring objectivity to the evaluation of proposals while dealing with a large amount of diverse information in the proposal.

They usually end up in a matrix such as:
I've grabbed this from an evaluation of an IT system.

There are numerous problems with the approach:

In most cases the weights are arbitrary: unscaled, uncalibrated and without repeatable reference to the real world. There is also scale compression for low-weighted scores against high-weighted ones, misleading scorers as to the effect of their scores. The weights are nothing like those used for uni grades, where a score is weighted by the proportion of the academic program that the course represents.

The total can be sensitive to low-weight 'herding': a number of high scores on low-weight factors can overwhelm a high score on a high-weight factor, as illustrated below.

Proposal 'b' scores 30 for 'value', presumably an important factor, and has the lowest score for this factor; it is nevertheless catapulted to the highest total by high scores on some low-weight factors.
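
A toy worked example of the herding effect (Python; the weights and scores are invented):

    # Illustrative: low-weight 'herding' overwhelming the high-weight 'value' factor.
    weights = {"value": 0.40, "reporting": 0.15, "training": 0.15,
               "documentation": 0.15, "support": 0.15}

    scores = {
        "a": {"value": 9, "reporting": 5, "training": 5, "documentation": 5, "support": 5},
        "b": {"value": 5, "reporting": 9, "training": 9, "documentation": 9, "support": 9},
    }

    for proposal, s in scores.items():
        total = sum(weights[f] * s[f] for f in weights)
        print(proposal, round(total, 2))
    # a: 6.6, b: 7.4 -- 'b' wins the weighted total despite the lowest score on 'value'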

One could argue that the weightings as a whole deal with this in terms of the objective of the project, but without evidence that the problems of scaling, calibration and variability arising from arbitrary assignment have been addressed, the scheme remains vulnerable to inadvertent manipulation or outright error. I wonder how many procurements of large projects have gone off the rails because of this approach?

There are a few ways of overcoming this. I will address them in the next post on this topic.

Wednesday, May 25, 2016

10 factors 8: risks

8.    Risks. Risk management is a great proactive way to solve potential problems before they occur. Identify risks early in the project and continue to manage risks throughout the project.

Tim Lister is sort of famous for his statement that risk management is how grown-ups manage projects. What does this mean?

Let's consider a major project risk: the delivery team will not be capable of achieving the project outturn performance (performance of the completed project output). We manage this risk by bringing people with the appropriate skills and experience onto the team.

A major risk is that there will be cost and time over-runs. How do we manage those?

Using 'reference class forecasting' we assess the likelihood of failure along the dependency lines of particular work package items. We then time the related activities, with suitable coordination buffers, to allow for risk; so an activity might be timed, all going well, to take 2 weeks. But we know all does not go well, based on history; so, to achieve a 95% probability of performance, the activity is provided a period of 3 weeks to complete. The project accommodates the remaining at-risk portion in a buffer.
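
A sketch of sizing the buffered duration from reference class data (Python; the historical durations are invented):

    # Illustrative: set an activity duration at the 95th percentile of its reference class.
    reference_durations_weeks = [2.0, 2.1, 2.0, 2.5, 3.0, 2.2, 2.8, 2.0, 2.4, 3.0,
                                 2.3, 2.6, 2.1, 2.9, 2.2, 2.0, 2.7, 2.4, 2.3, 2.5]

    def percentile(data, p):
        ordered = sorted(data)
        return ordered[min(int(p * len(ordered)), len(ordered) - 1)]

    plan_if_all_goes_well = 2.0
    p95 = percentile(reference_durations_weeks, 0.95)
    print(f"Plan {plan_if_all_goes_well} wks; schedule {p95} wks; buffer {p95 - plan_if_all_goes_well:.1f} wks")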

We do not do risk management by sitting in a circle dreaming up risks and allotting them a 'probability of occurrence and magnitude of damage' to do a sum that produces a 'risk rating'. That's bumbledom. The great icon of risk bumbledom is the 'risk matrix'. Cox has dealt with these. Others have also commented.

Monday, April 25, 2016

10 factors 7: issues

7.    Issues. Jump on issues as soon as they are identified. Prioritize and resolve them before they impact on your project. Take pride in keeping issues to a minimum.

Minor matters that the PM and his or her team must deal with pop up ALL THE TIME on any active project of non-trivial scope.

People usually end up with an 'issues log' sitting in an Excel spreadsheet. I advise against this. Excel is far from bullet-proof, and using it for mission-critical information courts disaster, even if you have a rigorous nightly redundant back-up plan.

Using a bug-tracking tool from our IT developer friends might be a solution, or a custom-built Access database could be suitable. You could, of course, go the whole way and build a project information management system in Access that ties issues, risks and actions to project elements, work items, contributors (suppliers, partners, owners, approvers, etc.) and deliverables.

Examples are: Meridian Systems and, at the simpler end, Project Perfect.

Prominent issues need to be tracked in production meetings and quickly updated. However, issues are only part of it. Decision items need to be tracked as well: what decision is required, when, and from whom. "Issues" aside, decision delays can be worse than an unresolved question on a project.
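
A minimal sketch of the kind of structure I mean (Python; the field names are mine, not from any particular tool), tying issues and decisions to work items and the people who must act:

    # Illustrative issue/decision register entries; a real system would sit in a database,
    # not a spreadsheet, and link to WBS items, contributors and deliverables.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class RegisterItem:
        kind: str            # "issue" or "decision"
        title: str
        work_item: str       # WBS reference
        owner: str           # who must act or decide
        required_by: date
        status: str = "open"

    register = [
        RegisterItem("issue", "Loading dock clearance conflict", "WBS 3.2.1", "Design manager", date(2016, 5, 6)),
        RegisterItem("decision", "Approve facade sample", "WBS 4.1.3", "Principal", date(2016, 5, 13)),
    ]

    overdue = [r.title for r in register if r.status == "open" and r.required_by < date(2016, 5, 10)]
    print(overdue)   # ['Loading dock clearance conflict']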

Friday, March 25, 2016

10 factors 6: quality

6.    Quality. Understand the expectations of your customer in terms of quality and put a plan in place to meet their expectations.
To borrow from Kant, who taught that existence is not a predicate, I argue that 'quality' is not a predicate of projects. The correct predicate is 'performance'.

A customer seeks a certain level of performance, and possibly needs the PM's assistance to develop acceptance criteria for the performance sought.

Quality, care of the 'TQM' fad and its relations of past years, is not a thing separate from performance. As soon as it is treated as an overlay on production management, failure is being courted, because it is an 'add-on', not an inherent part of an activity to achieve a performance outcome.

I also wonder at the functional difference between 'plan to meet their expectations' and 'put a plan in place...'. The latter is a very popular piece of clumsy writing that usually adds nothing to the simple present tense of the verb.

Saturday, March 5, 2016

5. Control the work in progress

The overall control system I use in practical terms is the production horizon: when will 'x' be finished, compared to when it needs to be finished.

A look-ahead program will help discuss such questions, and Earned Value Analysis will help tell you where you've been, but you need leading indicators of success as well.

Commitments are one such: have you committed dollars to agreed activities sufficient for their timely delivery (e.g. you've got the next sub-contractor signed with sufficient time for him to mobilise and start work in time)?
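
A sketch of the commitment check as a leading indicator (Python; the packages, lead times and dates are invented):

    # Illustrative: is each sub-contract committed early enough for mobilisation before its start date?
    from datetime import date, timedelta

    today = date(2016, 3, 5)
    packages = [
        # (package, required on-site date, mobilisation lead time in days, committed?)
        ("Structural steel",  date(2016, 4, 18), 42, True),
        ("Roofing",           date(2016, 5, 2),  28, False),
        ("Lift installation", date(2016, 6, 20), 90, False),
    ]

    for name, on_site, lead_days, committed in packages:
        latest_commitment = on_site - timedelta(days=lead_days)
        if committed:
            continue
        if today > latest_commitment:
            print(f"WARNING: {name} not committed; latest commitment date {latest_commitment} has passed")
        else:
            print(f"{name}: commit by {latest_commitment}")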


Thursday, February 25, 2016

10 factors 5: communication

5.    Communication. Make sure you keep everyone informed by providing the right information at the right time. Produce status reports and run regular team meetings.
Communication in projects is more than just passing information around, particularly in the more complex projects.

The PM/PM office needs to have a very clear appreciation of the information flows and needs of the project, the sources and destinations of information, and what 'meta-information' to keep track of.

Regular systems of meetings are part of the deal, for sure, but they are only hubs in an extensive communication/information network that links the project to its environment, its own performance and its players.

Document management is an important part of this, with currency of documents being tracked to ensure all are using current information.

A project 'data dictionary' to maintain the project's set of definitions and information baseline is important.

If it is a building/construction project, then a 'BIM' system should be considered, with a system to catalogue and organise the huge flow of information that such projects need.

Friday, February 5, 2016

4. Build a one team one goal approach

If the goal is the overall project and its mission, then 'yes'. But local goals change from time to time. Usually they are concerned with task completion, hand-off and collaboration, but the goals intertwine. A project is not so neat that at the operational level there is only one goal at any time.

Of course, scale might have something to do with this. On a $500m hospital project, with the total project team running to dozens of firms and hundreds of people and a WBS of several thousand items, there are many goals all the time!

Saturday, January 30, 2016

Balustrade height

I've always wondered about balustrade heights on tall buildings. I've been on apartment balconies that have what looks like a 900mm balustrade -- 20 storeys up! This might be the minimum BCA height, and it might even be a reasonable height for a balcony a couple of metres above ground; but the consequences of a fall from height are fatal. Balustrades should be designed for safety, not set at a comfortable hand height as though they were access hand rails.

In retail projects I've been involved in, the balustrades to internal voids were about 1200mm. Safer, but imagine an adult carrying a child: even at that height the child is entirely above the balustrade. Not good.

One of the craziest low balustrades I've seen is on top of the "Cheese Grater" in London. Over 200 metres above ground, and the balustrade looks like it's below the centre of gravity of the man on the right. A gust of wind could present a danger, as could a moment of unsteadiness on the man's part. Here the balustrade should be 1500mm.

Cheese Grater Balustrade

Monday, January 25, 2016

10 factors 4: duration

4.    Duration. Keep your delivery timeframes short and realistic. It is easier to be successful if your deadlines are shorter rather than longer. Split large projects into "mini-projects" if possible. Keep each mini-project to less than six months if possible. This keeps everyone motivated and focused.

'Short' and 'realistic' are not always on the same planet. Delivery periods, rather than being 'short' by some arbitrary measure, need to be set in terms of the risks to delivery that they must deal with, the performance of similar projects in similar circumstances (using 'reference class' forecasting), and with the sponsor agreeing to a probability target for delivery.

This might mean that the sponsor wants to know with, say, 85% probability that the deliverable will arrive by a certain date.

Your durations will need to include various buffers, an idea developed in the 'critical chain' literature, but generally applicable.
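
A sketch of setting the delivery commitment at the sponsor's probability target (Python; the activity estimates are invented):

    # Illustrative: Monte Carlo a small chain of activities, read off the 85th percentile duration.
    import random

    random.seed(7)
    # (optimistic, most likely, pessimistic) durations in weeks
    activities = [(4, 6, 10), (8, 10, 16), (3, 4, 7)]

    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)
        for _ in range(10_000)
    )
    p85 = totals[int(0.85 * len(totals))]
    likely = sum(mode for _, mode, _ in activities)
    print(f"'Most likely' total: {likely} wks; commit at 85% confidence: {p85:.1f} wks")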

Durations also need to have an eye to performance or capability delivered and configuration issues that might attend the options or trade-offs for any capability-timing 'couples' that exist in the project.