OK kids, this is like leading lemmings into a circle and running them into a death spiral!
- it’s ok to use an LLM to help you assemble your thoughts
- write your own content, don’t just copy/paste content generated by an LLM; that’s lazy.
- proofread your work! do your due diligence.
- this is still a human craft.
by ElJefeDSecurit (bsky:@eljefe.social)
There’s been a recent trend among several prominent conference CFP review board members across industries: they report a growing number of CFP submissions showing evidence of having been generated by an LLM. I’ve personally seen a number of anecdotal examples of submissions with what look to me like simple, basic errors, and in my own experience I’ve read academic whitepapers that were easily suspect of being AI-generated content. This hits close to home, since even I have been accused of writing a paper so good that it was thought to have been generated by an LLM!
Let’s talk about some of these observations. Using generated content in editing and coding tools has become commonplace among early adopters, most notably the younger generation, who have a native grasp of social media and the content and information discovery skills that less-connected folks, even digital elders, sometimes fail to comprehend. This is less generational than it is experiential: more and more individuals gravitate toward voices with reason and substance, voices that tell a story that resonates, that tap into a sense of belonging to a central truth, whether backed by facts or by contextual facts that are truthy enough. I think there is a strong desire to be heard as part of the “in” crowd, just to be validated, in an era where people are isolated from physical interaction. And, as a result of that lack of physical social engagement, people are resorting to engagement with artificial personas and agents. (Insert your favorite ‘go touch grass’ meme here.)
Now let’s translate this to the perspective of the conference submission reviewer. Many times they are volunteers: seasoned individuals who have presented or been part of their conference circuit for multiple seasons, and who have been asked to help with reviews because of their expertise and their wisdom about their audiences. They are handed the unenviable task of reviewing, in their spare time, a stack of papers, 3-7 pages in length, every single one of them supposedly the blood, sweat, and tears of aspiring conference presenters, each vying to get their moment on stage! It’s so exciting, and yet the burden is not lost on these poor souls who have to pick just a handful of the best of the best of those papers. But THEN…
😬
They start reading, and reading, and they get this uneasy sense that they recognize this bulleted list of points, and the summary at the end, and then, all of a sudden,
“Let me know if you’d like further refinement or additions! 🚀”
-Copilot
(Yes, that is the output from one of my chatbots. I couldn’t write that shit myself. Come on, really? I’s writin’ HUMAN-MADE ARTISANAL CONTENT, YO!!!)
They have to look at each one of these papers for originality, quality, and relevance. Well, boom, you just failed the first one on the sniff test alone. Original? You literally copy-pasta’d the output. How, pray tell, is that original? They’re also looking for quality. Well, GUESS WHAT, that just got questioned too; having seen that output, how can they POSSIBLY imagine there’s any quality left? Their bias has already taken over. And now, relevance? Dude, you just lost on originality and quality; they won’t consider it relevant at this point. It’s done!
Look, these reviewers are NOT dumb. They’ve probably done this a number of times; I’ll bet they do it at work, ’cause they’re that Type A kind of folk who leave nothing to chance. They want to make sure there is quality and depth, and that there are solid references to past work. Nobody learns alone, and if you think you do, you are lying to yourself and yourself alone. SO, they are looking for whose shoulders you are standing on, to see that you’re not just rehashing some existing thing with fancy words. Trust me, they can smell fecal matter on paper.
They also want original work! Using an AI to come up with a submission’s content is effectively abdicating critical thinking and self-actualized reasoning to a math equation that predicts the next word just relevant enough to complete a sentence. If all you do is run a prompt and return the generated result, then what exactly did you bring to the table? Where is your idea? Your question?
Tell me, how’s that fair to the other bots?
Like, no, seriously: other people are also working hard on their own projects, trying to get their own visibility, putting in real hard work, and you come in with a fancy prompt and a long diatribe you wrote, er, generated on a hungover Sunday morning? Copied it into a doc and said, ‘ya, that’s good enough’?
Look, I’m not surprised at all by the recent trend of conference CFP rules being updated to include terms stating that if they detect AI-generated content, they will outright decline it, due to copyright and attribution issues. I don’t blame them. That SHOULD be the standard: CFP submissions written by humans. It’s kind of like what that cueball VC investor who stole Mosaic said: there are only a few spaces left that only humans do, and frankly, I think innovative applied research presented at conferences feels like a red line we should not allow to be crossed.
In my subject domain, cybersecurity, accuracy is key and precision is demanded, and we KNOW that AI-generated content can run rampant, or hallucinate as some like to call it, when left to its own devices and not grounded in facts. And that, frankly, is the fundamental root issue: it demonstrates a lack of technical rigor, and that undermines your credibility.
It’s not about your technical rigor, it’s about how you augment it
Now, let’s be fair: did I use an LLM to write this? Well, no. But did it help me shape some of my thoughts? Perhaps. I asked it, “How would conference content submission reviewers react to AI-generated content submissions?” and, well, it did not disappoint:
If an AI submission passes basic checks but lacks depth, reviewers might be annoyed rather than impressed. However, if AI-generated content helps to synthesize complex ideas in a meaningful way, some may appreciate its role as a supportive tool rather than a replacement for genuine research
Well, not exactly wrong, at least. I do think content reviewers are not averse to submissions that demonstrate how you augment your research with AI, how you leverage these tools to accelerate your own work and grow the community’s collective knowledge; that is what I believe we all aspire to. I believe we will see a shift in content submissions requiring you to show depth of work and critical, experiential research outcomes that go beyond the submission itself. I see policies shifting to adopt a level of rigor similar to that of academic institutions and applied research companies; these will become the model for baseline CFP submissions, so that the material presented at conferences sustains that artisanal, human quality bar.
Of course, I still use AI like it’s water flowing from the heavens; it’s absolutely simplified my work life. It’s enhanced my intel-gathering and coding skills. But still, it’s only good enough to pass the butter, not to do my thinking for me.
So let me end with this: if you are going to use an AI to generate your idea for a CFP submission, then why not just let the AI present the content for you? What do you really bring to the table? Ruminate on that before you hit the send button.
May the 4th be with you.