In the freelance world, there’s been a fair amount of chatter about ChatGPT and whether it will replace content writers. It might, for some content.
But in my specific freelance niche, in which I interview my clients and their customers for case studies, I think the real concern is whether it's ethical or safe to use ChatGPT at all, particularly when the subject matter is sensitive.
To explore this, I had to experiment with ChatGPT and understand its capabilities.
If you’ve been living under a rock and don’t know what ChatGPT is, it’s a natural language processing tool created by OpenAI and trained on enormous amounts of data collected through 2021. Elon Musk was one of OpenAI’s original founders, and Microsoft, its partner, is investing billions in it. The tool is free, although there’s a paid tier too. Once you create a login, you can ask it questions or use it to compose content.
Once I created a login, I submitted this prompt to the chatbot: Write an article about how freelancers should be careful about inputting sensitive customer information to ChatGPT and cite sources.
In just seconds, ChatGPT gave me a 300-word article written in the style and tone of a college essay that hit all the high points of the topic, cited sources, and even gave me a list of tips.
Links to each source appeared at the bottom of the article. Three of the four links led to pages not found.
I used three plagiarism checkers to see if the article was blatantly ripping anything off, and it came out clean, designated as original content.
Here are the results:
As the world of freelancing continues to grow, the use of online communication platforms such as ChatGPT has become increasingly popular. While this platform is great for communicating with clients and completing projects, freelancers need to be careful when it comes to inputting sensitive customer information. Here are a few reasons why and how you can protect yourself and your clients.
Firstly, ChatGPT is not completely secure. While the platform does have some security measures in place, it is not foolproof. Hackers and cybercriminals can still gain access to the platform and steal sensitive information, putting your clients at risk.
In addition to the platform’s security concerns, freelancers must also consider their own cybersecurity practices. Simple mistakes such as weak passwords, sharing login information, and leaving a device unattended can all lead to a security breach.
So, what can freelancers do to protect themselves and their clients? Here are some tips:
In conclusion, while ChatGPT is a great tool for freelancers, it is important to be careful when inputting sensitive customer information. By following the tips above and educating yourself on cybersecurity best practices, you can protect yourself and your clients from potential security breaches.
So…with a little effort, I suppose you could edit this article to give it some life, verify the sourced information, add keywords (maybe that could be an additional prompt), and voila! You’ve got yourself a blog post.
For small businesses trying to feed the content beast and improve organic Google search results, I suppose they could use ChatGPT on their own and no longer hire freelancers to do the work for them.
Granted, it would be pretty bland content. And if you’re not a content person, you may not think to check for plagiarism or verify the sources.
ChatGPT was trained with content pulled from the internet. But from where on the internet exactly? Surprisingly, I couldn’t find the answer on ChatGPT’s FAQs page.
Instead, a Google search gave me links to blog posts that claimed the ChatGPT AI model learned to create content by being fed “books, webtexts, Wikipedia, articles” etc. I could take this at face value, but hey, that’s not what I do.
I finally discovered the source of this information: an article in BBC Science Focus titled ChatGPT: Everything you need to know about OpenAI’s ChatGPT-3 tool. It’s a great primer on ChatGPT and is devoid of the marketing spin you’ll get from the rah-rah ChatGPT-is-better-than-sliced-bread crowd.
The article got me thinking about what ChatGPT ingested for its learning model.
For one, books, which tend to be copyrighted.
Then we have webtexts and articles, all scraped from the internet. From government informational websites to content on — I’m thinking worst-case scenario — partisan news sites and the National Enquirer, for all I know. Maybe articles you’ve written. Anything without a paywall and all without getting anyone’s permission or providing compensation.
And let’s not forget Wikipedia, which, while often helpful, isn’t necessarily information you can trust, since it depends on volunteer writers (some with an agenda) and editors. I find good stuff in Wikipedia footnotes, but I would never rely on Wikipedia or cite it as a source for an article I’m paid to write.
As you can gather from my tone, it goes against my nature to rely on content I can’t verify. And if you care about where your content comes from, you should be leery, too.
To be fair, you can be more specific in the prompts you give ChatGPT.
For example, I wanted only sources I could supposedly trust, so I gave it this prompt:
Write a 500-word article on transportation logistics using only sources from government websites and news websites (except for FoxNews).
Here are the results:
Transportation logistics is an essential part of the global economy, ensuring the timely and efficient movement of goods and services from one place to another. In recent years, transportation logistics has become increasingly important due to the rise of e-commerce and globalization. The government and news websites provide valuable insights into the current state of transportation logistics and the challenges facing the industry.
According to the Bureau of Transportation Statistics, the U.S. freight transportation system moved over 18 billion tons of goods in 2019, valued at $21.1 trillion. The majority of this freight was transported by trucks, which accounted for 67% of the total tonnage. Rail and water transportation were the second and third most common modes of transportation, accounting for 14% and 7% of the total tonnage, respectively.
However, the transportation industry faces several challenges, including a shortage of truck drivers and the aging infrastructure. The American Trucking Association reported that there was a shortage of 60,800 truck drivers in 2018, and this number is expected to rise to 160,000 by 2028. The aging infrastructure is also a concern, with many bridges and highways in need of repair or replacement.
The federal government has taken steps to address these challenges. In 2018, the Federal Motor Carrier Safety Administration proposed changes to the hours-of-service regulations, which govern the amount of time truck drivers can spend on the road. The proposed changes aimed to increase flexibility for truck drivers while maintaining safety standards. In addition, the government has allocated funds for infrastructure improvements, such as the $1 trillion Infrastructure Investment and Jobs Act, which was signed into law in 2021.
The transportation industry is also adapting to new technologies and trends, such as the rise of e-commerce and the use of autonomous vehicles. E-commerce has driven the growth of the logistics industry, with consumers increasingly expecting fast and free shipping. According to the U.S. Census Bureau, e-commerce sales in the second quarter of 2021 totaled $231.9 billion, up 15.1% from the previous year.
Autonomous vehicles are also becoming more common in the transportation industry. In 2020, the U.S. Department of Transportation launched the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) Initiative, which aims to promote safety and transparency in the testing and deployment of autonomous vehicles. The initiative is part of the government’s broader efforts to support the development of autonomous vehicles and ensure their safe integration into the transportation system.
The COVID-19 pandemic has also had a significant impact on the transportation industry. The pandemic led to a decrease in demand for certain goods, such as gasoline and air travel, while increasing demand for other goods, such as personal protective equipment and home delivery services. The pandemic also highlighted the importance of the transportation industry, with truck drivers and other transportation workers deemed essential workers.
In conclusion, transportation logistics is a critical part of the global economy, facilitating the movement of goods and services from one place to another. The industry faces several challenges, including a shortage of truck drivers and aging infrastructure, but the government and the industry are working to address these issues. New technologies and trends, such as e-commerce and autonomous vehicles, are also shaping the future of transportation logistics. The COVID-19 pandemic has highlighted the importance of the transportation industry and the essential role it plays in our daily lives.
But ultimately, while the article attributes a few claims with “according to,” I’m not sure how the rest of the text came to be. What mix of government blog posts or webtexts inspired or informed the article’s sentences?
We don’t know.
So now we’ve got these two articles and their sources (some of which were broken links in the first example), which raises the question: Can we trust that these articles are accurate, and that the AI is telling us the truth?
Good question, and one that ChatGPT addresses in an article on its website.
First, it reminds us that ChatGPT is not directly accessing the internet in real time. ChatGPT has learned from what it has ingested so far, and “it cannot access the internet, search engines, databases, or any other sources of information outside of its own model. It cannot verify facts, provide references, or perform calculations or translations. It can only generate responses based on its own internal knowledge and logic.”
In addition, the FAQs page states that ChatGPT “can occasionally produce incorrect answers. It has limited knowledge of world and [sic] events after 2021 and may also occasionally produce harmful instructions or biased content. We’d recommend checking whether responses from the model are accurate or not.”
Well, that’s good to know. If you’re going to use something ChatGPT spits out, you need to do your homework and fact-check on your own. Which I guess we can assume everyone using this tool is going to do, right?
Pardon my skepticism.
This was important to me, because I wanted to do more than tell ChatGPT to write a random article based on existing info in its database. I wanted to test it with a transcript from an actual interview I’d conducted with a client.
But this time, the ChatGPT FAQs page kept me from doing that by brandishing these two red flags:
That spooked me, because transcripts are unedited. And they can include sensitive information, which, as a writer, I make sure not to include in the final draft I provide to a client.
But wait! There’s more!
Everything you share with ChatGPT is used to improve its model. So any text I feed into ChatGPT’s mouth could potentially be digested and then regurgitated piecemeal into someone else’s essay, blog post, or even content on a competitor’s website.
No matter how small that regurgitated morsel is, it goes against my moral and ethical compass, let alone privacy and security concerns.
I’m not the only one questioning this. Amazon recently warned its employees not to share confidential information, including software code, with ChatGPT.
This missive came about after “the company reportedly witnessed ChatGPT responses that have mimicked internal Amazon data.”
Forbes has also explored the issues of privacy and security in this article, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, written by Dr. Lance B. Eliot, an AI expert from Stanford University.
From attorneys who ask ChatGPT to review a draft divorce agreement to programmers who ask it to check a piece of code, the input is now “fodder for pattern matching and other computational intricacies of the AI app. This in turn could be used in a variety of ways. If there is confidential data in the draft, that too is potentially now within the confines of ChatGPT.”
Eliot continues: “Your prompt as provided to the AI app is now ostensibly a part of the collective in one fashion or another. Furthermore, the outputted essay is also considered part of the collective.”
I can’t help but wince, and marvel, at the word “collective” used to describe the ChatGPT universe, because it reminds me of the Borg, who assimilate entire populations in the Star Trek universe.
If you’ve already used the tool and regret adding your sensitive information to ChatGPT’s database, you can always delete your account, and supposedly that will delete all your data with it.
Hmmm… so if I delete my account, ChatGPT is going to track down every piece of my data, parsed who knows how? Really?
After everything I’ve said, I’m not here to debunk ChatGPT. But I will say that it’s not a tool that I plan to use to support my clients, even if I filled out an opt-out form.
After all, my clients don’t hire me to regurgitate content from the internet. They hire me to think, to manage projects, and to write original content based on verified sources.
And they trust me not to share sensitive information with anyone.
Or, in this case, any thing.
I’d love to talk about your project. Go ahead and contact me!