AI tools are smart, but human creativity is smarter

I’m a huge fan of AI and the benefits it brings to daily life. We often don’t even realise we’re using AI, like when I use Spotify DJ to create a playlist based on my music preferences, or when Waze suggests a faster route home based on real-time traffic updates.

New AI tools are constantly being developed, and their use cases are rapidly increasing. I recently read that South Africa has the most ChatGPT users globally, which is encouraging but also concerning: it highlights the growing interest in AI technology, but it also raises challenges around digital literacy, data privacy and the ethical use of these tools.

I’m by no means an AI expert, but I do know that we need to understand this technology beyond its surface-level convenience. While AI has the power to transform industries and make our lives more efficient, it also comes with challenges in terms of data security and transparency. We need to be aware of how the data we’re inputting into AI platforms is being used, who has access to it and the long-term implications of sharing personal information with these platforms.

We also need to consider how AI affects skills development and, ultimately, employment. As more tasks become automated, we have to focus not only on upskilling but also on reskilling our teams to integrate AI into what we do, instead of letting it take over.

These concerns have led to the development of a new set of policies and guidelines for companies. For example, companies need to develop an “Ethical use of AI” policy that outlines best practices for ensuring transparency and accountability when using AI. Another is a “Data privacy and AI” policy, which ensures sensitive information is protected when using AI tools and platforms.

My work requires me to read and write a lot. Whether I’m proofing a news release, editing a messaging document, or writing an opinion piece like this, I need to ensure everything makes sense, is grammatically correct and is clear enough for the target audience to understand in context.

With the rise of platforms like ChatGPT, everyone now has access to AI tools, whether as a basic user or a paid subscriber. The platform is undeniably clever. When I first started using it for editing and proofing, I was blown away and thought, “Wow, this is amazing.” However, as I continued to use it more frequently, I noticed that the output often followed a predictable formula.

Some telltale signs that something has been created by ChatGPT specifically include:

  • A default to American English (see all those z’s)
  • Three adjectives to describe EVERYTHING – she’s smart, funny, and amazing
  • The Oxford comma before the final ‘and’ – she’s smart, funny, and amazing
  • The use of the words ‘foster’ and ‘seamless’
  • The words ‘cutting-edge’, ‘excited’, ‘best practice’, ‘furthermore’, ‘invaluable’, ‘reshaping’ and many others
  • Did I mention ‘foster’?
  • Headlines and sub-headlines in title case, with a colon separating two ideas – The Ethical Use of AI: What To Look Out For

While AI can be really helpful, especially for repetitive tasks, it still needs a human hand to ensure originality and authenticity. AI-generated content lacks the subtlety and emotional tone required for certain audiences. I believe this is where a combination of AI and human input is non-negotiable. Use AI to enhance your work, not to do everything for you. Let it assist with efficiency while you provide the creativity – there’s no replacement for that, after all.
