How Business Leaders Can Responsibly Embrace Generative AI
Author: ISACA Now
Date Published: 18 December 2023

Ensuring usage of generative AI aligns with an organization’s risk tolerance is no easy task, and ISACA survey findings show this is an area that is receiving insufficient focus across the enterprise landscape.

ISACA’s recent Generative AI poll indicates that only 28% of respondents say their organizations expressly permit the use of generative AI, yet 41% report that employees are using it regardless, and another 35% aren’t sure.

In a Forbes article analyzing how business leaders can make generative AI adoption successful at their organizations, author Phillimon Zongo, CEO of the Cyber Leadership Institute and a past ISACA Sydney Chapter board director, highlights three key considerations for driving responsible adoption of ChatGPT, Bard and other popular generative AI platforms:

Establish a strong tone at the top

Although the use of generative AI comes with risks, it is probably counterproductive—and unrealistic—to simply ban the technology from the workplace.

As Zongo writes, “Turning a blind eye or banning this transformative technology may create significant fault lines. First, employees might be tempted to bypass established governance processes and experiment with sensitive data on shadow computing platforms, creating deeper security and privacy issues. Second, when ambitious employees feel constrained by lethargic cultures, they join forward-leaning competitors.”

Understand legal ramifications

Policymakers and regulators are scrambling to keep up with the implications of generative AI, given how fast it has emerged over the past year. Enterprise leaders need to work with their legal teams to understand how new policies and regulations can impact their organization.

According to Zongo, “Once the policy requirements have been established, the next step is carefully working with your legal experts to understand the implications of applicable laws governing the use of generative AI. Blindly embracing generative AI can open serious legal issues. Worryingly, several lawsuits have already been filed in the U.S. against companies that allegedly trained their models on copyrighted data without authorization.

“Crafting the appropriate legal responses can be a complex undertaking for organizations trading in multiple jurisdictions because, until recently, there have been very few global laws governing the use of generative AI. Regulators are, however, starting to act, albeit in disjointed and sporadic approaches.”

Create opportunities for generative AI to augment human cognition

Generative AI brings remarkable capabilities to create content, increase productivity and more, but as with other potent technologies, there is real uncertainty, and anxiety, about how it will affect certain job roles over time.

Zongo writes, “One concerning finding from the ISACA survey relates to deep uncertainty around generative AI. Only 6% of respondents said their organizations are providing training to all staff on AI. This is worrying, given that at least 45% of respondents expressed concerns that generative AI will also replace many jobs. Left unaddressed, employees fearing generative AI may replace their jobs may actively resist change, derailing program success. To cope with this potential disruption, management must deliberately invest in programs to upskill their workforce.”

Editor’s note: Read Zongo’s full Forbes article on generative AI here.
