So much editing and rewriting was required that it convinced me AI content would not work for my team at the moment.
It's reassuring that no one is publishing their AI content without any editing at all, but it's a bit of an indictment of current AI tools that exactly 50% of respondents need to do significant editing. As a copywriter, I often find that rewriting and heavily editing content written by someone (or something) else is harder than writing it myself.
What’s your perception of AI-generated content?
All survey respondents were asked about their perception of AI-generated content, with the majority not rating it particularly highly. Over three quarters of respondents consider AI content to be no better than average quality.
Whilst these results are not surprising, I feel there are factors contributing to these ratings, such as the quality of the prompts provided to the models and the customization of the models used for content generation. Both might result in AI content not meeting expectations, especially for niche needs.
It's a bit of an indictment of AI-generated content that more than 7 out of 10 respondents perceive it to be average quality at best.
My sense is that this is down to low awareness of AI content tools, as much as it is a reflection of the actual quality. The technology has developed so quickly over the last 12 months that any preconceived ideas people have about the quality are likely out of date.
I find this result amazing. Two-thirds of respondents are judging the AI content to be of at least average quality. I don't know if that says more about the average quality of general content or of AI content, but the data here shows that the majority of respondents judge it to be at least on par with average human output.
In my AI content experiments I've found quality to be very poor. Short blocks of text can be fine - if generic - but the longer the piece, the more it becomes clear that AI is not up to the task. Nearly every piece I created with AI was unusable, when held up to our usual standards.
Do you believe that Google can identify when content has been written by AI, rather than a human?
Good quality content is key to SEO, but before we asked whether or not AI content has an impact on search, we wanted to know whether Google can identify it in the first place.
The majority (70.8%) of respondents believe Google is capable of identifying AI content, at least to some extent. Only 13% don’t think Google is able to spot it.
I think how Google treats basic queries is going to really change. AI content isn't going to kill Google, but it's definitely going to change the way that Google interacts with searchers.
Google's Helpful Content Update warned content creators about relying on automation, so you have to think that they're increasingly vigilant to AI copy.
The capability to identify such content is one thing; whether that capability is used in some way to help determine rankings is another entirely. Researchers in the AI/ML space have long been working on models that help identify content from the same author; however, we are yet to see such models incorporated into ranking systems. Recently, some of the rationale for this was revealed with Google's Helpful Content Update, which reinforced the idea that as long as the content is helpful, it is not inherently wrong for it to be machine-generated.
Do you believe AI-generated content negatively impacts SEO when compared to human-generated content?
Those respondents who do think Google can identify AI-generated content were asked whether it negatively impacts SEO.
Almost three quarters believe it negatively affects SEO at least to some extent, with just 6.5% thinking it doesn’t impact it at all.
Good content is good content and bad content is bad content, regardless of how it's produced. Google cares about getting searchers the information they're looking for, full stop. There's no intrinsic reason for them to penalize AI-generated content that answers a searcher's query as well as human-generated content. The only difference is that marketers can now churn out a much higher volume of bad content than they could when using human writers. In either case, though, editing, fact-checking, and optimizing are necessary to create good content that's going to rank.
At the moment, there is no data supporting the argument that some content's SEO is negatively impacted because it's machine-generated. Instead, Google's official statements highlight user experience and user-perceived value, which leads me to believe that there is potential in this niche, as long as there is value to the user.
This is an interesting one, because Google's Search Advocate John Mueller recently confirmed that AI-generated content is considered to be automatically generated content, which is against their Webmaster Guidelines, and could lead to a manual penalty. However, he stopped short of claiming that Google can actually spot it.
I suspect that the less human editing is carried out on a piece of AI content, the more likely it is Google will pick it up.
If Google can identify it as AI content then yes, I think it could negatively impact SEO. And regardless of that, poor quality content negatively affects SEO - and that's currently what AI tends to produce.
Do you see any ethical challenges posed by AI-generated content?
Up to this point, our survey respondents had expressed some concerns around the quality of AI content, but one of the most interesting questions in this area is whether or not there are ethical issues to overcome.
A large majority (67.5%) of respondents are concerned about the ethics of AI-generated content.
Those who do see potential ethical challenges of AI-generated content provided a range of concerns about the technology.
Accountability for content that is offensive or inaccurate. Who takes responsibility? It's easy to blame the technology.
Plagiarism is already rampant, and AI will likely make it worse. Is it ethical to plug someone else's work into the ML algorithm as part of the AI learning process? Does that mean that all AI-generated work is derivative?
It could lead to misinformation - for example a marketer could upload AI content for a medical client without checking it (as it's been accurate so far) and share something that is dangerous. This could go for other industries too! We have to hope that marketers use it as a tool rather than expecting the AI to do all the work - you still need to put your editor head on and check the accuracy of your little AI team member's work.
Taking money out of the hands of content creators.
People using AI without informing clients.
Making people think that you created something, when in reality you didn't.
I've already talked about AI making things up, so this could be a huge issue, considering the amount of misinformation already on the internet. This could be literally dangerous - for example, content on medical conditions with errors could lead people to make decisions that are injurious to their health.
I think this might well be the most interesting question around AI content.
A lot of the challenges raised focus on issues you could argue exist with human writers anyway: misinformation, bias, plagiarism, and so on. However, I think the concern expressed by many of our respondents is that AI content tools make these problems easier to create or more likely to occur.
I worry about the ethics of AI art generators, since (to my understanding) they have to be trained on past artists’ work, and those artists aren’t compensated in any way.
There are areas where accountability in AI is a big, complex issue, but content marketing is not one of them. The rules are no different than if you're using ghostwriters or freelancers. Ultimately, it has to be the responsibility of whoever is publishing the content to verify that it's factually accurate and not offensive. This is nothing new.
Again, the only difference is that AI now allows you to create content at a much higher volume, so it's your responsibility to make sure you're not producing and publishing more than you can review and verify.
All of the specific challenges mentioned are, unfortunately, important and well-established in the research space for such technologies. Issues with bias, plagiarism, quality, and ethics are often trumped by the capitalistic nature of these fast-growing industries, resulting in technologies that, albeit flawed, are rapidly pushed to production with a mere acknowledgment of their shortcomings. The users of the technology are then tasked with understanding and working around its flaws, which is often not realistic.
Accountability with AI content is super interesting. If I generate something that is either biased or misinformation, somebody does have to be ultimately accountable. And if we don't have those standards in place, it's gonna go off the rails.
However, the number one thing I'm worried about is bias because there's a possibility that we end up just reinforcing whatever is out there at the time of training.
A lot of people think that AI is intended to replace the writer, the creator, and the designer but in reality, it's meant to augment us as humans to be able to get to a final asset a lot faster than we would if we simply relied on our own thinking and capabilities.