It’s too late to ban your developers from taking advantage of generative AI. They’re already using it.
And whether or not that boost has been proven, devs feel more productive with GenAI.
After 2023 showed cautious adoption of AI, a new Sysdig survey reported a 500% growth in the number of workloads running AI and machine learning packages in 2024, with the number of GenAI packages more than doubling.
Yet, about three years into the current GenAI movement, less than 40% of organizations have policies in place to mitigate generative AI risks to cybersecurity, intellectual property management and privacy, according to a McKinsey Global Institute survey released in March 2025.
Your organization needed a generative AI policy yesterday. Its compliance, security and reputation rest on it.
At CTO Craft London last week, Glyn Roberts, CTO of digital solutions at Vention, walked the audience through how his software outsourcing agency, with more than 3,000 engineers, worked together to develop its own generative AI policy.
Each industry, organization and even department will have different generative AI use cases and needs, but read on as Roberts offers a framework for how to build your own GenAI policy.
For Starters: Yes, Your Devs Are Using GenAI
Early in 2024, Roberts said, “I was asked, ‘Do we need to be concerned about AI tools being used by our teams?’ My response was: ‘Nah. We’ve got strong ISO policies. We’ve got regular training and assessment for all our staff.’”
Vention already had in place robust data classification, information security, data privacy and web materials policies, so Roberts wasn’t worried.
The very next week, the Read.ai meeting summarizer started popping up in meetings. At first, it seemed convenient. But at the time, in order to read a teammate’s generated transcript, you had to install the tool. Both clients and colleagues started installing it and suddenly there were five extra AI participants on any given conference call, automatically listening in.
It was difficult to uninstall, Roberts said, and when he did some digging, he noticed that the Read terms of service at the time included:
- Allowing undisclosed third parties to access user data, like personal information.
- Storage of users’ audio and video information for up to two years.
In addition, Read claimed, and continues to claim, a license to broadly use, reproduce and analyze user content.
This sudden change to meetings wasn’t the only one. With many non-native English speakers on his global team, Roberts said, colleagues started using GenAI to produce “improved-ish” content, including project requirement write-ups.
However, he later told The New Stack, while the “improved” output started out really well, “after a couple of paragraphs, the content started reading more like a story you would tell children ’round a campfire. Still relevant, but the structure morphed.”
Most notably, however, it was suddenly taking development teams less time to get the same tasks done.
Roberts realized that people across the organization were already using generative AI. It was indeed time to set an official AI policy.
What’s Already Being Used?
An AI policy will be particular to each organization. Before you look to the future, it’s likely best to start with what your colleagues are already using.
Roberts kicked off with a survey, asking anonymously by department:
- What AI tools are you currently using?
- How much do you believe that you understand AI?
- How excited are you about AI?
There were no wrong answers. And it’s not just internally chosen tools: because Vention is a software agency, its employees don’t just use GenAI for themselves; they also use whatever their clients want them to use.
There was a wide range in the number of unique AI tools identified per department:
- Delivery: 28
- Management: 26
- Sales: 18
- Marketing: 10
- Legal: 4
- Operations: 3
Roberts was most surprised that the legal team was already using four GenAI tools, as he finds them exceptionally risk averse. ChatGPT and Grammarly were used widely across departments.
He also discovered some generative AI tools that, at the time, weren’t as well-known in the North American and European headquarters. For example, Kling AI was popular with developers in Uzbekistan, before it became more known in the U.S. or U.K.
The survey also found that the more colleagues believed they understood AI, the more enthusiastic they were to adopt and use it.
What Are the Business Priorities?
Once you know what’s already being used, clarify your priorities as a business.
Roberts established a cross-functional team that represented Vention’s six main departments across each of its 12 country locations. Together, this team identified three generative AI priorities:
- Understand what is happening to its data, particularly across borders.
- Understand risks, including to overall trust and to compliance with regulations such as the European Union’s General Data Protection Regulation (GDPR) and the U.S. Health Insurance Portability and Accountability Act (HIPAA).
- Remain innovative.
The global cross-functional AI policy team at Vention reviewed each tool against these three organizational priorities. Then, beyond these standalone generative AI tools, the team also had to consider that AI is being added to nearly every product already in use, including throughout Microsoft 365, which Vention and its customers use heavily, as well as within Slack, Figma and Notion.
“With this AI race underway, our data has now been used for more things. All the [Software as a Service] products that we’re using are using AI just automatically, so there’s a high-risk exposure,” Roberts said. “Just because you approved the tool three [or] four years ago as great for the business, as AI has been added, you need to reconsider what the impacts are now on how they’re using that data.
“So many tools are integrating AI quietly — stealth adoption, I call it — and obviously the data you have could actually be very sensitive.”
A generative AI policy, Roberts argues, should consider:
- What you want to achieve.
- Which tools are being used, which are suitable and which are approved.
- Legal and security requirements.
- Free versus enterprise packages.
“Obviously, if it’s free, you’re not paying for the product,” he said. “You are the product, so therefore don’t expect any real security or privacy around that.”
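To make that checklist concrete, here is a minimal sketch of how a team might record each tool against it. The field names and the enterprise-only rule are illustrative assumptions for this article, not Vention’s actual policy schema.

```python
from dataclasses import dataclass

# Illustrative sketch only: one way to record a tool against the checklist above.
# Field names and the "enterprise plan only" rule are assumptions, not Vention's schema.

@dataclass
class ToolAssessment:
    name: str        # e.g. "ChatGPT", "Grammarly"
    objective: str   # what the business wants to achieve with the tool
    approved: bool   # passed legal and security review
    plan: str        # "free" or "enterprise"

    def acceptable_for_company_data(self) -> bool:
        # Free tiers are treated as high risk: "you are the product."
        return self.approved and self.plan == "enterprise"

print(ToolAssessment("ChatGPT", "draft project docs", True, "enterprise")
      .acceptable_for_company_data())  # True
```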
This team meets a couple times a year to update the AI policy, to consider and approve new tools, to develop and document training plans, and to contribute to a company-wide knowledge base.
Follow the Data
AI use cases are incorporated into the policy, each one clarifying how data may be used. Vention sorts data into four tiers; a minimal sketch of how the tier rules might be checked follows the list below.
- Public data: Information that is publicly known and not confidential can be used with both unapproved and approved AI tools.
- Private data: Asset information, contact information, systems data and database schema are all considered confidential, and are allowed to be used with approved tools. However, they are not allowed to be used with unapproved tools.
- Restricted data: Similarly, business strategy and intellectual property data are allowed to be used only within AI tools on the approved list.
- Client data: Including source code, security tool data and the client’s documentation, these types of data are only allowed to be used within AI tooling approved by the client. Clients also can supply the licensing to use their own AI tools.
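As an illustration of those four tiers, here is a minimal sketch of a data-tier check. The tool names are invented placeholders; only the tier logic reflects the rules described above, and none of this is Vention’s actual tooling.

```python
# Minimal sketch of the four data tiers described above.
# Tool names are invented placeholders, not real approved tools.

APPROVED_TOOLS = {"ExampleAssistant Enterprise"}    # company-approved list (placeholder)
CLIENT_APPROVED_TOOLS = {"ClientSuppliedCopilot"}   # approved or licensed by the client (placeholder)

def allowed(data_tier: str, tool: str) -> bool:
    """Return True if data in this tier may be used with the given AI tool."""
    if data_tier == "public":
        return True                                 # any tool, approved or not
    if data_tier in ("private", "restricted"):
        return tool in APPROVED_TOOLS               # approved tools only
    if data_tier == "client":
        return tool in CLIENT_APPROVED_TOOLS        # client-approved tools only
    return False                                    # unknown tier: deny by default

print(allowed("client", "ExampleAssistant Enterprise"))  # False: not client-approved
```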
Increasingly, clients are now asking for Vention’s AI usage policy.
The International Organization for Standardization’s newest standards are a good place to start with any AI policy consideration.
- ISO 42001: AI governance and risk management
- ISO 27001: Information security
- ISO 27701: Privacy management
- ISO 37301: Compliance and ethical culture
Whatever your AI restrictions or enablements, make sure that you communicate your policy across your organization. And emphasize that AI does not replace human intelligence.
Roberts said he and his team have clearly communicated a simple message, no matter which tool is used or which department uses it: “You’re still responsible for the output. You’re still responsible for the outcome.”
He clarified, “Yes, you can use AI with the boundaries in place, but you can’t go, ‘Oh the output was terrible. AI did it.’”
Keep Your AI Policy Current and Practical
The industry and AI tooling are evolving so fast that any AI policy runs the risk of going out of date as soon as it’s agreed upon.
“It’s great to have this document already in place, but the limitations of it are that it’s fixed. You don’t change it regularly,” Roberts said, so the agency needed more updatable options for internal usage, beyond the legalese of the more rigid AI policy.
“As things change so fast in the industry, we need to be more specific about use cases of how to actually do this and utilize the AI tools. We generated AI use case examples for all roles, individuals.”
These can change more frequently, so this guidance isn’t part of the external AI policy shared with clients; instead, it is incorporated into the shared organizational knowledge base. For each tool in this searchable knowledge base, there are AI use cases.
For example, AI for generating and tuning source code, test cases and any other artifacts is permitted in both approved and unapproved tools only following written consent from the client or if the client has provided the appropriate license.
On the other hand, AI is forbidden to be used across the board — in either approved or unapproved tools — for any shortlisting, summarizing or processing of resumes or in hiring decisions.
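Purely as an illustration, entries like the two use cases above might be recorded in such a knowledge base along these lines; the keys and wording are assumptions for this sketch, not Vention’s actual schema.

```python
# Illustrative only: the two use-case rules above as hypothetical knowledge-base entries.

USE_CASE_RULES = [
    {
        "use_case": "generate or tune source code, test cases and other artifacts",
        "allowed": True,
        "condition": "written client consent, or a client-provided license",
        "tools": "approved or unapproved",
    },
    {
        "use_case": "shortlist, summarize or process resumes; hiring decisions",
        "allowed": False,
        "condition": None,
        "tools": "forbidden in all tools, approved or not",
    },
]
```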
“It’s easy to look at what to do every time that a new member of staff says, ‘I don’t know how to achieve this,’” Roberts said. “Based on policy, we can then add it to the list and give another example of how to utilize it.”
Whatever your AI use cases, your policy shouldn’t hold your developers back. Brakes on a car enable it to go faster, Roberts reminded the audience. And if you don’t provide your employees with approved AI tools, you’re likely increasing your risk: either you lose your talent or they use the tools behind your back.
“If it’s not defined, your staff are going to define it for you, and what they think may be acceptable could not be acceptable at all, potentially, in your view,” Roberts emphasized to close his talk. “This is why a form of AI policy should really be there.”