A botched government report should be a wake-up call on AI hype

Deloitte Australia’s botching of a report for a federal government department, thanks to AI doing some of the heavy lifting, is a canary in the coalmine: the technology can be as big a hazard in the workplace as it is an aid.

Who knew (outside the tech industry and those well versed in tech lingo) that your average AI assistant was capable of “hallucinating”?

These so-called hallucinations are created as part of the AI’s pattern-matching process, especially when the model has limited or biased training data and a lack of real-world understanding of the problem it’s trying to solve. This leads to the AI model guessing and delivering results that sound right but aren’t factually accurate.

(Word of warning and point of irony: I used AI as a reference for the above definition.)

Deloitte Australia’s headache with AI is the latest example of why the technology needs to be carefully managed.

The reason AI models resort to guessing when they are short on real information is that they possess an innate desire to please the person asking the question. That desire means an AI model would rather lean on dubious sourcing and incorrect interpretation to deliver a wrong answer than deliver nothing at all.

It’s also what has left Deloitte red-faced and, as reported by The Australian Financial Review, forced it to issue a partial refund to the federal government. The original report, which reportedly cost $440,000, was created by Deloitte for the Department of Employment and Workplace Relations (DEWR) and contained a completely fabricated quote from a federal court judgement and invented academic references.

A revised version of the document has now been uploaded to the department’s website, minus the fabrications and typos. The AI-fabricated references and citations (alongside a smattering of grammatical errors) were outed by one of the academics quoted in the original report.

So Deloitte picks up an ignominious distinction and the department wins a discount. It would be fanciful to think that this is an isolated case of AI going off the rails.

Beating up management consultants may have become something of an Australian sport these days, but they aren’t the only ones using AI at work.

Corporate Australia, in particular, is wildly excited about the introduction of this productivity-charging technology. It is the new toy that everyone wants to play with before reading the instruction manual, and most fans realise that there are plenty of wrinkles that need ironing out.

There is already plenty of evidence of companies adopting AI with an eye to eventually replacing workers in more process-driven administrative jobs, but some, such as Commonwealth Bank, have learned the hard way that there is a big gulf between what AI can do in theory and what it delivers in practice.

The adoption of generative AI, in which the model learns from existing data and can then spit out text, videos and pictures when asked a question, is still in its infancy, so mistakes will be made.

Deloitte declined to answer direct questions about whether AI was used to create the report.

In Deloitte’s case, using AI to help put the report together wasn’t the problem; the problem was the firm’s laxity in having the report checked by humans before it was stamped customer-ready.

Adding insult to injury, Deloitte markets itself as a firm that can educate its corporate clients on how best to deploy AI.

Its glossy marketing material contains the boast that “deploying Artificial Intelligence and Generative AI across an enterprise requires the same level of operational strategy and action it takes to manage a manufacturing line or complex supply chain”.

In other words, even the experts in using AI appear vulnerable to tripping up.

Deloitte has also been criticised over a lack of transparency about its use of AI in the report. The new version reportedly includes an explicit concession in the methodology that generative AI was used for what the firm called “traceability and documentation gaps”.

“There have been media reports indicating concerns about citation accuracies which were contained in these reports. Deloitte conducted this independent assurance review and has confirmed some footnotes and references were incorrect,” the department noted on its website.

It makes you wonder just how many worms are in the can being opened by AI. The Deloitte snafu certainly lifts the lid on a couple of big ones, and there will almost certainly be many more such mistakes in the future.

This isn’t an argument against AI, but there is a need for a codified set of checks and balances to be put in place, and it’s probably going to take more unfortunate bouts of hallucinations before we get there.
