AI Policies are Lacking, Use is High, and Adversaries are Taking Advantage, Says New AI Study

AI Pulse Poll
Author: ISACA
Date Published: 25 October 2023

Sixty-nine percent of ISACA pulse poll respondents say adversaries are using AI as successfully or more successfully than digital trust pros are

Schaumburg, IL, USA—A new poll of global digital trust professionals reveals a high degree of uncertainty around generative artificial intelligence (AI), few company policies around its use, lack of training, and fears of its exploitation by bad actors, according to Generative AI 2023: An ISACA Pulse Poll.

Digital trust professionals from around the globe—those who work in cybersecurity, IT audit, governance, privacy and risk—weighed in on generative AI—artificial intelligence that can generate text, images and other media—in a new pulse poll from ISACA that explores employee use, training, attention to ethical implementation, risk management, exploitation by adversaries, and impact on jobs.

Diving in, even without policies
The poll found that many employees at respondents’ organizations are using generative AI, even without policies in place for its use. Only 28 percent of organizations say their companies expressly permit the use of generative AI, only 10 percent say a formal, comprehensive policy is in place, and more than one in four say no policy exists and there is no plan for one. Despite this, over 40 percent say employees are using it regardless—and the percentage is likely much higher given that an additional 35 percent aren’t sure.

These employees are using generative AI in a number of ways, including to:

  • Create written content (65%)
  • Increase productivity (44%)
  • Automate repetitive tasks (32%)
  • Provide customer service (29%)
  • Improve decision making (27%)

Lack of familiarity and training
However, despite employees quickly moving to use the technology, only six percent of respondents’ organizations are providing training to all staff on AI, and more than half (54 percent) say that no AI training at all is provided, even to teams directly impacted by AI. Only 25 percent of respondents indicated they have a high degree of familiarity with generative AI.

“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organizations need to catch up in providing policies, guidance and training to ensure the technology is used appropriately and ethically,” said Jason Lau, ISACA board director and CISO of Crypto.com. “With greater alignment between employers and their staff around generative AI, organizations will be able to drive increased understanding of the technology among their teams, generate more benefit from AI, and better protect themselves against related risk.”

Risk and exploitation concerns
The poll explored the ethical concerns and risks associated with AI as well, with 41 percent saying that not enough attention is being paid to ethical standards for AI implementation. Fewer than one-third of respondents’ organizations consider managing AI risk to be an immediate priority, 29 percent say it is a longer-term priority, and 23 percent say their organization does not have plans to consider AI risk at the moment, even though respondents note the following as top risks of the technology:

  1. Misinformation/disinformation (77%)
  2. Privacy violations (68%)
  3. Social engineering (63%)
  4. Loss of intellectual property (58%)
  5. Job displacement and widening of the skills gap (35%)

More than half (57 percent) of respondents indicated they are very or extremely worried about generative AI being exploited by bad actors. Sixty-nine percent say that adversaries are using AI as successfully or more successfully than digital trust professionals.

“Even digital trust professionals report a low familiarity with AI—a concern as the technology iterates at a pace faster than anything we’ve seen before and as its use spreads rapidly through organizations,” said John De Santis, ISACA board chair. “Without good governance, employees can easily share critical intellectual property on these tools without the correct controls in place. It is essential for leaders to get up to speed quickly on the technology’s benefits and risks, and to bring their team members up to speed as well.”

Impact on jobs
Examining how current roles relate to AI, respondents believe that security (47 percent), IT operations (42 percent), and risk and compliance (tied at 35 percent each) are responsible for the safe deployment of AI. Looking ahead, one in five organizations (19 percent) say they are opening job roles related to AI functions in the next 12 months. Forty-five percent believe a significant number of jobs will be eliminated due to AI, but digital trust professionals remain optimistic about their own jobs, with 70 percent saying AI will have some positive impact on their roles. To realize that positive impact, 80 percent think they will need additional training to retain their job or advance their career.

Optimistic in the face of challenges
Despite the uncertainty and risks surrounding AI, 80 percent of respondents believe AI will have a positive or neutral impact on their industry, 81 percent believe it will have a positive or neutral impact on their organizations, and 82 percent believe it will have a positive or neutral impact on their careers. Eighty-five percent of respondents also say AI is a tool that extends human productivity, and 62 percent believe it will have a positive or neutral impact on society as a whole.

Learn More
Read more in the infographic outlining these findings, as well as other AI resources, including the Artificial Intelligence Fundamentals Certificate, the complimentary The Promise and Peril of the AI Revolution: Managing Risk white paper, and a free guide to AI policy considerations, at http://vf0.chinaqinyu.com/resources/artificial-intelligence.

About ISACA

ISACA® (vf0.chinaqinyu.com) is a global community advancing individuals and organizations in their pursuit of digital trust. For more than 50 years, ISACA has equipped individuals and enterprises with the knowledge, credentials, education, training and community to progress their careers, transform their organizations, and build a more trusted and ethical digital world. ISACA is a global professional association and learning organization that leverages the expertise of its more than 170,000 members who work in digital trust fields such as information security, governance, assurance, risk, privacy and quality. It has a presence in 188 countries, including 225 chapters worldwide. Through its foundation One In Tech, ISACA supports IT education and career pathways for under-resourced and underrepresented populations.

Twitter: www.twitter.com/ISACANews
LinkedIn: www.linkedin.com/company/isaca
Facebook: www.facebook.com/ISACAGlobal
Instagram: www.instagram.com/isacanews

Media Contacts

communications@chinaqinyu.com
Emily Ayala, +1.847.385.7223
Bridget Drufke, +1.847.660.5554
