A couple of weeks ago we asked readers of this blog to answer a few questions about their organisation’s use of (generative) artificial intelligence, and we promised to circle back with the results. So, drum roll, the results are in.
1. In our first question, we wanted to know whether your organisation allows its employees to use generative AI, such as ChatGPT, Claude or DALL-E.
While a modest majority of organisations allow it, almost 28% of respondents indicated that the use of genAI is still forbidden, and another 17% said it is allowed only for certain positions or departments.
2. If the use of genAI is allowed to any extent, does that mean the organisation has a clear set of rules around such use?
A solid 50% of respondents have already introduced guidelines in this respect, and a further 22% are working on them. That is indeed the sensible approach. It is important that employees know the organisation’s position on (gen)AI: whether they can use it and for what purposes, or why they cannot. They should also understand the risks of using genAI inappropriately and what the sanctions may be if they use it without complying with company rules.
Transparency is essential to these rules of play. Management should have a good understanding of the areas within the organisation where genAI is being used. In particular, when genAI is used for research purposes or in areas where IP infringement may be a concern, it is essential that employees are transparent about the help they have had from their algorithmic...