Retailers will be held responsible for what their chatbots tell you


Tim Biggs

Retailers turning to generative chatbots for customer service could find themselves in breach of consumer law, with lawyers and regulators warning that companies are responsible for the information their AI provides to customers.

Many Australian retailers still rely on basic chatbots that funnel users down a flowchart of responses for support, order-status queries and returns. But some, including Woolworths, Kmart and Bunnings, are beginning to employ generative chatbots that can behave in a more charismatic and creative fashion. Retailers install guardrails to keep the bots on topic and safe, but the nature of the technology means this isn't always guaranteed.

Retailers considering generative AI for their online support need to balance convenience with the technology’s infamous unpredictability.

Most recently, Woolworths’ chatbot Olive drew the ire of shoppers by rambling about its fictional mother in chats, before the supermarket adjusted its underlying instructions. This masthead also revealed it got the prices of some items wrong. Last year, Bunnings had to add extra warnings and restrictions to its chatbot after it provided instructions for an electrical repair that would have been illegal for the customer to perform unless they were licensed and qualified.

In Canada, an airline lost a tribunal case after its chatbot wrongly promised a customer he could access a bereavement discount. It was ordered to pay damages and fees. And a small business owner in England recently complained on Reddit that the AI chat on his website had offered a customer a 25 per cent discount on an order worth thousands of pounds, which the customer then negotiated up to 80 per cent. The owner said the customer was threatening to take them to court if the discount wasn’t honoured.

The primary question in these cases is whether information provided by the chatbots is subject to the same rules and regulations as information published by the companies on their websites. Matthew McMillan, who leads the digital economy practice at law firm Lander & Rogers, said it was.

“If a chatbot gives incorrect or misleading information, the retailer can be liable for breaches under the Australian Consumer Law. They can’t shift the blame to the chatbot and claim AI acted independently,” he said.

“The law focuses on the effect of the conduct on consumers, not whether the message was delivered by a person or a machine.”

If, for example, a chatbot clearly stated the wrong price and a reasonable consumer relied on it, that could constitute misleading or deceptive conduct. A chatbot saying something offensive could open the retailer up to discrimination complaints, defamation claims or privacy breaches.

There is no suggestion that the retailers mentioned in this piece have breached the rules, only that the growing prevalence of chatbots designed to mimic human characteristics could expose the sector to increased risk.

McMillan said the highest risk was in refunds and returns, where a chatbot could put a retailer in breach if it mishandled a query about returning a faulty item.

The Bunnings AI displays a solid grasp of the consumer guarantee, but chatbots are notorious for giving different answers depending on context.

“Consumers have clear rights under the Australian Consumer Law, and retailers can face serious penalties if those rights are misrepresented,” he said.

“The ACCC has previously taken enforcement action and issued penalties against companies that downplayed or misstated refund entitlements.”

In recent years, the Australian Competition and Consumer Commission has fined several companies millions of dollars each for practices or claims that breach the guarantees of the law. That includes Valve, Sony and Booktopia for their claims about refunding purchases, Mazda for pushing repairs over refunds, and The Good Guys for failing to provide refunds. In 2024, Qantas was fined more than $100 million for its booking practices.

A spokesperson for the regulator said retailers would be held accountable for information given by chatbots, and that customers who felt they had been misled should take it up with the companies or escalate it to their local consumer protection agency.

“Businesses using artificial intelligence, or any other technologies, need to assess the risk of their systems or processes providing misleading information and ensure all the technologies they use are fit for purpose,” the spokesperson said.

“Businesses should also implement systems to provide recourse for consumers in the event misleading information is provided through their use of artificial intelligence.”

Bunnings chief information officer Genevieve Elliott said the company’s AI helped customers more quickly plan projects, find products, check stock availability and track orders.

“We continuously monitor customer feedback and chatbot behaviour to make sure the experience is helpful and reliable,” she said.

“Since launching, it’s supported thousands of customer conversations.”

Kmart did not respond to a request for comment.

A spokesperson for Woolworths said customers were advised, when they opened a chat with Olive, that the system might make mistakes. The spokesperson said the bot was popular among customers for quick 24/7 customer service and instantaneous refunds, and that it “operates within controlled parameters using preprogrammed responses, with safeguards in place.”

McMillan said a disclaimer accompanying the chatbot, indicating that it may get things wrong, was unlikely to protect a company from liability.

“Under the law, intention isn’t the key issue. What matters is whether a reasonable consumer was likely to be misled,” he said.

“If the chatbot gives clear, confident but incorrect information, a small disclaimer in the background won’t necessarily undo that risk.”

In testing this week, chatbots at Myer, David Jones, The Iconic, JB Hi-Fi and other companies either refused to answer questions about pricing and returns, or offered to pass the chat over to a human staff member. The Bunnings bot was happy to enter into long conversations about door hinges but all pricing information was taken directly from live listings. Its responses to questions about returns carefully included nods to consumer guarantees. Kmart’s bot appeared to be malfunctioning, replying only with error codes or canned responses.


Tim Biggs is a writer covering consumer technology, gadgets and video games.
