Over 1 in 4 firms ban GenAI over privacy, data security risks: Report
More than one in four organisations have banned the use of GenAI over privacy and data security risks, a new report showed on Monday.
Most firms are limiting the use of Generative AI (GenAI) over data privacy and security issues, and 27 per cent have banned its use, at least temporarily, according to the ‘Cisco 2024 Data Privacy Benchmark Study’.
Among the top concerns, businesses cited the threats to an organisation’s legal and intellectual property rights (69 per cent), and the risk of disclosure of information to the public or competitors (68 per cent).
Despite these concerns, 48 per cent of respondents admitted to entering non-public company information into GenAI tools.
About 98 per cent said that external privacy certifications are an important factor in their buying decisions, the highest level in years.
“Organisations see GenAI as a fundamentally different technology with novel challenges to consider,” said Dev Stahlkopf, Cisco Chief Legal Officer.
“More than 90 per cent of respondents believe GenAI requires new techniques to manage data and risk. This is where thoughtful governance comes into play. Preserving customer trust depends on it,” Stahlkopf added.
Most organisations are also aware of these risks and are putting in place controls to limit exposure.
About 63 per cent have established limits on what data can be entered, and 61 per cent restrict which GenAI tools employees can use.
Consumers are also concerned about how their data is used in AI today, and 91 per cent of organisations recognise they need to do more to reassure customers that their data is being used only for intended and legitimate purposes in AI. This figure is similar to last year’s, suggesting that little progress has been made, the report said.