Generative Artificial Intelligence (AI) platforms like OpenAI's ChatGPT have become more than an assistant in people's daily lives. Serving millions of users worldwide, AI is developing at a rapid rate. At the bottom of this post, I provide clear lists of what not to do on ChatGPT. So what are the security risks?
Cast your mind back to a world without ChatGPT
I don’t write blog posts or content on ChatGPT. On principle, I write copy for my clients myself. My relationship with Artificial Intelligence began when I was studying full stack web development with The University of Birmingham. I hadn’t heard of ChatGPT until it was due for release. Soon it would become the hot topic of conversation among members of the cohort. Discussions focused on new opportunities for developers, disruption of the job market, plagiarism, ethics and more.
Immediately, my reaction was mistrust, for a multitude of reasons. Being a ’90s kid, I grew up in the era of horror films focused on the evils of AI, often depicting AI as the enemy of humankind (thanks Arnie for the warning). Not to mention real-world examples such as the surveillance systems used by Israel to oppress Palestinian people’s freedom of movement. Was my mindset towards AI an outdated, backwards mentality?
As we hasten to ChatGPT: wise people fear new technology…
Sitting still while I witnessed flurries of people hastening to try ChatGPT for themselves, my mind was cast back to a parallel scene from Downton Abbey:
This scene features characters who are hesitant to communicate through technology. It’s an accurate illustration of how many people felt when new technology was first introduced. Your great-grandma and grandad are likely to have reacted like this. In the past, new technology was treated with caution for fear of it being abused. Communicating via technology, especially in the aftermath of war, felt like a highly vulnerable security risk.
Throughout the series, Downton Abbey’s chef Mrs Patmore expresses an ongoing fear of her skills being replaced by machinery. A chef offers much more than the ability to mix ingredients or toast bread. Chefs remained critical to the culinary industry because a mixer or a toaster can’t think or create. However, once machines can think, even the chef, the baker and the candlestick maker become replaceable. Yet here we are, like mindless sheep, hastening to use it. What are the real-world implications for us today?
A Real Life Parallel to Downton Abbey
At the same time as I was studying with The University of Birmingham, I was working for a humanitarian charity in a Mosque environment. It’s horrifying how quickly members of the team began to say: “Get ChatGPT to write x y z”. Since I complete projects efficiently, originally and to a high standard, I was baffled as to why we would outsource such skills to AI. Am I Mrs Patmore?
Seeing this as an opportunity to educate, I decided to begin warning colleagues about the ethical and safety concerns with ChatGPT. It fell on deaf ears; I realised no one wants to listen. Soon my eyes became fine-tuned to ChatGPT’s style, and I developed a sensitive radar for recognising it. Cringy copy written by ChatGPT lacks originality, style, authenticity, intelligence and real human tone. Artificial Irony. Honestly, reading the regurgitated vomit that ChatGPT churns out makes me cringe. It’s not a good look for your organisation or business, suggesting a lack of skill and genuine intelligence. Audiences are becoming more sophisticated at recognising exactly this. The dislike and mistrust of AI has gained traction. Mrs Patmore – you’re not alone after all!
Beyond the cringe factor, what concerns me most is privacy and ethics. Is ChatGPT safe?
ChatGPT’s Safety – The Truth
It turns out that my reserved nature towards AI is not my backwards Victorian ancestors attempting to drag me back into the nineteenth century, but genuine intuition. Today, I sit here reading warning after warning about the dangers of using ChatGPT and AI. I have known from day one that AI carries a huge potential for data leakage. Most people using AI have little understanding of how the app works. If you did, you would never tell it the things you do.

ChatGPT Safety Issues
- Data Breaches – ChatGPT’s Privacy Policy states that OpenAI can disclose your personal data to service providers, vendors, other third parties, government authorities “in compliance with the law”, affiliates, business account administrators and more. Given that the paid version of ChatGPT requires your name, date of birth and email address, there is serious concern for the safety of your data.
- Privacy – Never disclose confidential information to ChatGPT. Treat any confidential or sensitive details you enter as public by default. ChatGPT holds your history for 30 days; one stated reason for doing this is to “develop” its services. Disabling the “Improve the model for everyone” setting prevents your chats from being used in future development and training. Never share anything with the app that you aren’t willing to share publicly. Your face is sensitive biometric data – don’t share it with AI.
- Phishing – Due to the development of AI, phishing scams have become harder to identify. Recent examples include scammers imitating your bank, or even the voice of a loved one, to ask for an urgent transfer of funds. Chatbots are being used as convincing AI services, harvesting sensitive data.
- Fake news and misinformation – Anyone can publish whatever they like online, without qualifications or credentials to back it up. AI scans data, but it can’t verify the accuracy of that information, which leads to inaccurate results. In one recent example, a woman was told by ChatGPT that it was safe to handle a poisonous plant after the app misidentified it as a harmless specimen, putting her life directly at risk.
Yes, the intelligence of AI is artificial. It makes mistakes.
There are more risks associated with using ChatGPT. However, the above points lay the foundation for what not to do with ChatGPT, and AI in general, to stay safe.
What shouldn’t I share with ChatGPT?
- Passwords and login credentials.
- Personally identifiable information (PII) – Your full name, date of birth, phone number, address, email address, national insurance number, your children’s names and pets’ names.
- Confidential or sensitive data – e.g. internal company documents, private matters and medical issues.
- Bank details – Never share any financial information with AI.
- Intellectual Property – Of any kind.
- Your face and the faces of your loved ones.
How can I use ChatGPT safely?
You can. By treating AI as a public database from which to draw information, and by tweaking your settings, you can stay safe using the platform. However, don’t follow social media trends asking you to provide sensitive data. Furthermore, make sure you only share with ChatGPT what you are willing to share with a public audience on social media. Treat it as an assistant, not a replacement for creativity or thinking. Remember that AI is just technology. It doesn’t have feelings, it can’t help you through struggles in life, and it doesn’t care about the real-life consequences of sharing your data.
AI facial recognition technology is becoming more common across the UK, from your local Asda to a rugby game. This technology oppresses Palestinian people. By sharing your face with ChatGPT, you potentially give facial-recognition surveillance systems a clear image of yourself.
Don’t let AI replace human-to-human creativity and connection.
Finally, I hope this guide helps you to reflect on your relationship with AI and how to stay safe using AI technology such as ChatGPT.
Share your thoughts and comment below!