The problem with AI-generated content on June 20, 2024
Hello everyone. It’s June 20, 2024. I’d like to share some thoughts about generative AI in the context of content writing. I think the general problem with it is best viewed through the eyes of the reader.
If I’m relying on AI to generate content for me to read and digest, large language models will do fine.
If I’m relying on AI to generate the final version of content for my audience, large language models may need significant fine-tuning to be worth the risk of turning off readers.
How do you turn off readers with Generative AI?
- Stochastic tone: Use a GPT-ish tone (the default ChatGPT tone works best here) and readers who care about your actual thoughts will skip your content. Some, though not all, readers care that the content wasn’t AI-generated. I usually like those readers, so I don’t want to scare them off, even if they’re only a few. And I would venture that a significant majority of quality readers care that the content was polished. Polishing may evolve, and these expectations may change over time: people may become more permissive about synthetic polishing of human thoughts if AI polishers improve a bit, which could happen with some fine-tuning. For now, the best approach for anyone who isn’t a model-training enthusiast is to simply write your thoughts, and if you use AI, at least read the output over and fix anything that is clearly off. And don’t assume the structure AI provides is great just because structuring things takes time.
- Missing key fundamentals: For practical purposes, generative AI should be thought of as a highly sophisticated bell curve with its own database. Until it can commercially retrieve and incorporate new information the way humans can, AI-generated content lacks the significant private, fundamental information that ecosystems of experts still have. This should change over time, but this is my June 20 article, so that’s where things stand for now.