National Australia Bank: not a G’day for the Aussies

By in banking, cloud, general, management, products, security, strategy on Tuesday, 30 November 2010

In fact, not a good weekend, and this week probably won’t be a whole lot better. Australia’s biggest bank is still off-line after a routine upgrade went wrong.

NAB’s technology woes continue...

While the other banks may laugh behind closed doors about NAB’s misfortunes, the truth is, it’s a case of “There but for the grace of God go I.”

A technical house of cards

Failures like these at the global banks are the result of a triple whammy of factors: technical under-investment, poor resourcing and weak compliance.

The major banks have long regarded their technology purely as a necessary evil, with few recognising its real contribution to their bottom line.

The expert staff who developed many of the systems have long since moved on, tired of seeing their contribution ignored and pay eroded.

They’ve all been replaced by technically marginal system housekeepers whose abilities are just enough to keep things ticking, but nowhere near strong enough to keep everything afloat when the good ship banking springs a major leak.

Greedy technology partners over-milking the cash cow

The problem is compounded by grasping outsourcing suppliers refusing to move the customer on from old, high-revenue legacy systems to resilient 21st-century cloud infrastructures that don’t pay anything like as well.

With so little technical resource around, these hapless banks just get the wool – prime Australian wool in this case – pulled over their myopic eyes.

Playing dominoes with people’s lives

NAB may point to this glitch and try to excuse it. What the bank can’t excuse is the inability of its remaining systems, like its on-line banking, telephone services and ATM networks, to cope with the demand placed on them.

This “just good enough” attitude pervades every major retail bank across the world.
It’s why no global bank has emerged from the credit crisis with anything they can remotely describe as innovative or ground-breaking.

Will your bank be next?

So don’t be fooled. The bank you’ve entrusted your hard-earned money to is no better than poor NAB. It’s just that they haven’t drawn the losing card this week.

And when they do, their house of cards will come tumbling down just as hard…

4 thoughts on “National Australia Bank: not a G’day for the Aussies”

  1.

    What garbage you write. I’ve been reading you for ages, hoping you’d eventually prove my initial opinion of you wrong. You haven’t.

    You always spout this stuff with little or no knowledge of the real situation… I used to work at NAB, and FYI, things aren’t nearly as terrible as you describe. Neither are they at other banks where I’ve been employed.

    What banks have you worked at? And don’t think that doing “executive support” at Barclays qualifies you to talk about managing a major IT infrastructure. If you could do better at running these things, you certainly WOULD be running them. I suppose that’s why you left the big companies… you didn’t have what it takes.

    Really, it would be much better for everyone if you got a bit of real-world experience managing big IT before you started mouthing off like this about (it seems to me) everyone in the whole IT industry.

    Bye.

  2.

    Hi, Joe,

    Thanks for your comment. You don’t pull your punches, do you?

    Pity you didn’t give the reasons for your view, but hey, rough with smooth, heat and kitchens, etc.

    I’ve never done exec support at Barclays. I left that to a great bunch of guys who could do it better than me and who were prepared to work the crazy hours!

    I provided the strategy: what to use, how to protect it, how to recover it, that sort of thing. But that’s by the by.

    OK, NAB. This was a mainframe upgrade that went wrong. Now, things go wrong. Bad things happen. But they shouldn’t happen and stay broken at a bank under compliance control.

    You see, this is what should happen…

    Firstly, a risk assessment is made. Then a probability analysis of failure. Then a disaster mitigation plan is conceived, which includes a roll-back strategy. Sorry, lots of big words there. You still with me?

    So in real terms, NAB should have done the upgrade on their test systems, then rolled it back. Then rolled it forward again.

    NAB had a corrupted file. That should have been versioned, delay-mirrored and reloaded. Clearly that hadn’t been planned for.

    The failure to have (a) tested the plan, (b) rolled it back and (c) been able to recover ONE file points to poor planning, poor execution and poor service management.
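    To put that in concrete terms, here’s a minimal sketch of the kind of rehearsal I mean (Python, with made-up file names and a dummy “upgrade” step, nothing NAB-specific): version the file, let the change trash it, roll back from the snapshot, and prove the restore is byte-identical before anyone signs off the change plan.

    ```python
    import hashlib
    import shutil
    import tempfile
    from pathlib import Path


    def checksum(path: Path) -> str:
        # SHA-256 digest: proves a restored file is byte-identical to the original.
        return hashlib.sha256(path.read_bytes()).hexdigest()


    def rehearse_upgrade(data_file: Path, apply_upgrade) -> bool:
        # 1. Version the file before the change touches it.
        before = checksum(data_file)
        snapshot = data_file.with_name(data_file.name + ".bak")
        shutil.copy2(data_file, snapshot)

        # 2. Apply the upgrade step on the test copy; assume it may corrupt the file.
        apply_upgrade(data_file)

        # 3. Roll back: reload the versioned copy over the damaged file.
        shutil.copy2(snapshot, data_file)

        # 4. Verify the restore actually worked.
        return checksum(data_file) == before


    if __name__ == "__main__":
        # Dry run on a throwaway test file, never on production data.
        workdir = Path(tempfile.mkdtemp())
        test_file = workdir / "accounts.dat"
        test_file.write_bytes(b"opening balances...")

        def botched_upgrade(path: Path) -> None:
            path.write_bytes(b"\x00corrupted\x00")  # simulate the failed upgrade

        print("roll-back verified:", rehearse_upgrade(test_file, botched_upgrade))
    ```

    The point isn’t the code, it’s that the roll-back is exercised and verified on test systems before the real change window, not improvised afterwards.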

    I once worked in a bank that used IBM Risk Systems. They began the roll-out, then IBM end-of-lifed the product halfway through. Neat, eh?

    Another case saw an entire system’s storage needing to be doubled because, when new software was added, no one was around to identify which of the old stuff could be deleted. So they just stuck the new alongside the old.

    I could go on, like the way one bank used DEC PDP-11s for 35 years because they couldn’t find a system to take them over.

    Now you could say these were all great banks with high technical competency, but I doubt many would believe you!

    Oh, one last thing. If NAB were technically OK, why did they have to open branches on Saturday and Sunday to deal with the fallout manually?

  3.

    What Neil says seems to be true. This is the latest update in the Border Mail:

    “University of Sydney IT professor Alan Fekete said yesterday typically databases should be able to function even if corrupt data has to be re-processed.

    “It’s possible to remain viable once a computer glitch kicks in by using a variety of hardware and by keeping records so you can untangle problems after the event,” he said.

    “In this case the system stopped operating when a file became corrupted, suggesting an oversight in the design of the wider IT systems.”

    One IT expert, who has helped implement and support systems at most of Australia’s major banks, said the bank’s claims that the problem did not result from human error were “rubbish.”

    “Computer systems do what they are told,” he said. “If the attempt to restore the corrupt file failed, then that can only be because somebody had not tested the restore process correctly.”

  4.

    Thanks, Jerry,

    I maintain that many banks have reduced IT resourcing to such a level that we’re likely to see more of this stuff happening.

    Thanks for posting the Border Mail report. It adds credibility to the story!