AUSTRALIA’S NATIONAL SCIENCE AGENCY
Our experts explain how to ensure responsible AI is developed and deployed to benefit Australian businesses and the community.
When a panel of artificial intelligence (AI) experts discusses the challenges businesses face in adopting responsible AI systems, you expect sparks to fly.
At our recent CSIRO Conversations webinar on responsible AI, they did. In fact, our AI bushfire prediction software Spark was just one of many transformative examples given of Australian businesses successfully deploying responsible AI.
Host Jon Whittle, Director of CSIRO’s Data61, painted a nuanced picture of everyday AI in Australia: with excitement and hype around products like ChatGPT, and fear around hypothetical existential threats.
But behind the media storm is a thriving research sector, with increasingly influential think tanks and national catalysts like the National AI Centre. So, how can we help businesses adopt responsible AI? Here are our key takeaways from the webinar.
AI is increasingly being used in critical sectors like mining, agriculture and health in Australia. However, trust in AI systems is low, and responsible practices are not given enough attention. In fact, according to the 2022 Responsible AI Index Report, if you ask most Australian CEOs how their companies are using AI, they can’t tell you or explain the impact of that use.
Judy Slatyer leads the cross-ecosystem Responsible AI Think Tank and is our entrepreneur-in-residence at Data61.
Judy stressed the importance of transparency and accountability when striking the balance between excitement and the ethical adoption of AI. She highlighted examples such as a major insurer aiming to embed AI deployment with ethical principles including transparency, privacy, human agency and control.
She said that while using AI to support faster claims processing and reduce risks for customers, managers must maintain accountability for the process. They must be able to explain to the CEO and board why AI is making decisions, and the CEO and board in turn should be actively involved in AI strategy and governance and receive continuous feedback.
Judy offered practical tips for considering AI ethics, including establishing organisational principles, leveraging resources, aligning intentions, and demonstrating leadership.
Trust is the most important factor in AI deployment in Australia, yet 61 per cent of Australians are either ambivalent about or unwilling to trust AI.
Trust is the essential requirement for the acceptance and growth of AI. One way we can drive trust in AI systems is to make them trustworthy. We also need to make sure evidence of trustworthiness is accessible and understandable for all stakeholders.
The call to action from the panel is clear: unless we collectively build trust in the use of AI, there will be an invisible barrier to excitement and organic adoption. The way to do that is through leadership in this space: businesses working with researchers to innovate, the ecosystem collaborating to create safeguards and share knowledge, and all of us uniting our voices to communicate the benefits of AI when it is used responsibly.
When the CEO and board take accountability, it unleashes the best of AI.
Rita Arrigo is the Strategic Engagement Manager of the National AI Centre. She said appointing a Responsible AI Champion in your organisation is a great way to grow a culture of ethics. Businesses, together with their stakeholders, can also set guardrails and ethical principles, incorporate end-to-end design, train staff, develop leaders in AI governance, and set risk frameworks with monitoring and assurance.
“We know from talking to industry, people need help now, and change can be overwhelming. Tap into the help available – you are not alone! You can start small with responsible AI off the shelf from established providers on the AI Ecosystem Discoverability Portal,” Rita said.
Businesses can also access the Responsible AI Network, a collaboration with a range of brilliant partners to bring expertise and advice to Australian businesses on navigating regulation and standards, legal frameworks, governance, leadership, ethics, and technology.
Our experts stress that measuring impact is paramount. Staying tuned into consumer and market feedback, and attuned to the impact of your products, is critical.
Businesses can build responsible AI systems by incorporating ethical principles and creating clear metrics around these principles. Assurances and risk management systems with a continuous feedback loop and opportunity for input from customers, employees, developers, management and other stakeholders are also key ingredients for success.
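As a purely illustrative sketch (not any tool mentioned in this article), the idea of tying clear metrics to ethical principles and feeding in readings from different stakeholder groups might look something like this in Python; the principle names, targets and data here are hypothetical:

```python
# Illustrative sketch: a metric tied to an ethical principle, with a
# simple feedback loop that accepts readings from multiple stakeholders.
from dataclasses import dataclass, field

@dataclass
class PrincipleMetric:
    principle: str            # e.g. "transparency"
    description: str          # what is being measured
    target: float             # threshold for acceptable performance
    readings: list = field(default_factory=list)

    def record(self, value: float, source: str) -> None:
        """Log a reading from a stakeholder group (customers, staff, auditors)."""
        self.readings.append((source, value))

    def meets_target(self) -> bool:
        """True when the average recorded value reaches the target."""
        if not self.readings:
            return False
        average = sum(v for _, v in self.readings) / len(self.readings)
        return average >= self.target

# Hypothetical example: measuring how often automated decisions come
# with a plain-language explanation.
transparency = PrincipleMetric(
    principle="transparency",
    description="share of automated decisions with a plain-language explanation",
    target=0.95,
)
transparency.record(0.97, source="customer survey")
transparency.record(0.94, source="internal audit")
print(transparency.meets_target())  # average 0.955 >= 0.95, so True
```

The point of the sketch is the structure, not the numbers: each principle gets an explicit, measurable definition and a target, and input flows in continuously from more than one stakeholder group rather than being assessed once at launch.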
Data61 is developing a bank that provides a collection of responsible AI questions for organisations to consider, developed from frameworks and standards with suggested metrics for success. An additional resource, the Responsible AI Pattern Catalogue, provides reusable risk mitigation practices.
The confidential Responsible AI Self-Assessment Tool from Fifth Quadrant can help you measure your responsible AI practice and provide suggestions on improving your ranking.
AI technology is being applied to significant humanitarian and wildlife challenges with a mandate to ‘do no harm’. Judy’s previous experience as CEO of the Australian Red Cross and World Wildlife Fund allowed her to describe how.
Judy has seen breakthrough use cases including the use of AI in humanitarian responses to disasters. Other cases include seeking transparency in supply chains to help eliminate modern slavery and wildlife poaching, and protecting coral reefs.
Our doors are open: come and work with us.
Liming Zhu, Research Director Software and Computational Systems at CSIRO’s Data61, said controlled experimentation with the help of experts is key.
“To innovate, businesses need to experiment with responsible AI and generative AI adoption by engaging with research organisations and setting up sandboxes,” Liming said.
“In one of our current projects, we are building a chatbot for a major Australian business, which has allowed both organisations to create new best practices in responsible AI and improved customer experiences.”
Liming has seen a huge amount of experimentation from businesses in the last six months to try and leverage new generative AI technology, as well as predictive and interpretive AI. In fact, he is seeing engineering companies entirely reinventing their approach to developing AI.
“This is exciting, but many companies are conservative in rolling out powerful AI models integrated into externally facing products and services because they are unsure about appropriate guard rails,” Liming said.
“Collaborating with research organisations can help eliminate some of this uncertainty. Reach out through our website to find out how to collaborate with us.”