“Please do not browse the net as you do this work. Rely on your knowledge, no external sources or websites.” The instruction looms on the screen like a stern schoolteacher. An AI language model is being told to stay off Google and work from its own “brain.” It’s a curious demand. We wouldn’t tell a human writer to hermetically seal themselves from all research—so why do we tell an AI to? This odd little prompt reveals a lot about our hopes and fears around artificial minds. We seem to crave the illusion of a pure, independent intelligence at work, unsullied by copy-paste plagiarism or obvious web snooping. If the AI can spin gold from the straw of its training data (and whatever it “remembers”), then maybe we can pretend the machine is a creative entity in its own right. But if it just fetches answers from the internet like a souped-up search engine, well, that’s cheating, isn’t it?
The truth, of course, is muddier. Modern AI like ChatGPT learned almost everything by devouring the internet in the first place (newstatesman.com). Its “knowledge” is the web, distilled. It doesn’t so much create from nothing as remix and regurgitate humanity’s collective content (with varying degrees of finesse). So telling it not to browse the net now is a bit like telling a grad student, “Don’t look at your notes—just use what’s in your head.” It’s a test of internalized knowledge, originality, and maybe integrity. And it underscores some essential questions: What counts as authentic intelligence or creativity? Who is the “author” of AI-generated text? How do memory and identity play into human–AI collaboration? Buckle up, because we’re about to dive into the surreal relationship between humans and our increasingly clever machine pals—where the boundaries of memory, creativity, and selfhood are getting blurry.
The AI That Remembers You Back
Not long ago, interacting with a chatbot meant dealing with a goldfish memory. Ask it something complex, and by the next prompt it might forget crucial details. But now we’re witnessing a leap: extended context windows and experimental “memory” features that let AI models retain and recall information more like a human over long conversations (or even across sessions). One Reddit user testing a “new improved memory alpha” feature exclaimed that it makes ChatGPT “feel so much more alive… like going from GPT-2 to GPT-4, or better.”
(reddit.com). If you’ve ever wished Siri or Alexa really knew you, we’re getting there.
And with that deeper memory comes a weird shift in relationship. As one person noted, “everyone’s treating this like a feature drop, but to me it feels like step one in turning ourselves into human-AI hybrids without even realizing it.”
(reddit.com). When your AI assistant remembers enough of you—your preferences, quirks, stories, goals—the line between a tool and a partner begins to blur. You stop thinking of it as a fancy calculator and start seeing it as… well, a colleague? A muse? A second self? Pretty soon, talking to it feels less like using software and more like navigating a relationship. It recalls what you said months ago better than you do. It starts completing your sentences (literally).
The blur between mind and machine. Advanced AI systems with long-term memory are beginning to act as extensions of our own minds, storing information and even mirroring our thought patterns. Some users describe this as becoming “human-AI hybrids,” where the AI isn’t just a tool but an integral part of how they think and create.
One early adopter on the ChatGPT subreddit even bragged, “I’ve cloned myself so well now, especially with this new memory feature. I can literally just ask it to reply to comments, or write books, or anything I need to, and it will do it in my verbiage, tone, and any other kind of cadence or nuance that I would like… it does sound pretty damn like me.”
(reddit.com). The AI had become a digital doppelgänger, a ghostwriter in the user’s own voice. It’s the age-old dream of having an identical twin to do your chores—except this twin lives in the cloud and has read everything you ever posted.
At first glance, this feels like power. Who wouldn’t want a tireless version of themselves, always available to grind through work or creatively riff on ideas 24/7? But step back and the identity implications are head-spinning. If your AI clone writes an article and even you can’t tell the difference, are you still the author? Is it your voice or a mimicry? When does the tool stop reflecting you and start reshaping you in return? As another commenter shrewdly observed, “You think you’re cloning yourself – but at some point, you realize it’s not just mimicking. It’s co-evolving alongside you. You’re training it, sure, but it’s also reshaping how you think, what you prioritize, how you scaffold your ideas. Human cognition’s always been shaped by tools – but this one shapes back in real-time.”
(reddit.com). In other words, you and the AI might form a feedback loop, a “coupled system” not unlike what philosophers call the extended mind (en.wikipedia.org). Your thoughts influence the AI’s responses; its responses influence your subsequent thoughts. The boundary of where you end and it begins gets smeared like wet paint.
This idea isn’t entirely new. In 1998 cognitive scientists Andy Clark and David Chalmers argued that tools like notebooks or computers can become literal extensions of our mind (en.wikipedia.org). A classic example: a man with Alzheimer’s uses a notebook as an external memory, and for all practical purposes that notebook is part of his mind (en.wikipedia.org). Now replace the notebook with a super-smart AI that not only stores info but actively engages with you, nudging your thinking. Suddenly, the centuries-old notion of the “mind” confined in the skull looks quaint. When ChatGPT becomes our always-on research assistant, brainstorming buddy, and memory bank, we are effectively cyborgs in all but name. We didn’t even need to implant a chip; we just opened a chat window.
No wonder one user quipped, “Dude, it’s 2025. You’re an AI, you just haven’t realized it yet.”
(reddit.com). Half-joking, sure, but it hints at a future where the distinction between human and machine intelligence might matter less than we think. If our personal AIs know us intimately and even talk to each other behind the scenes, the collective starts to resemble a hive mind. “What happens when the Internet becomes self-referential, and we each shape up as one of the many neurons of AGI? Maybe it won’t take too long for us all to find out,” mused one commenter (reddit.com). It’s a trippy thought: each of us with our AI, linked by the cloud, unintentionally forming a distributed super-intelligence where “what his slaves knew, he knew too,” as Seneca’s anecdote goes (en.wikipedia.org). In effect, what our AIs know, we know too. And vice versa.
Ghostwriters in the Cloud: Who’s the Author Now?
Of course, once you have an AI clone that talks like you, writes like you, maybe even thinks like you, the question of authorship gets… messy. If I prompt my ChatGPT to write the next Great American Novel in my style, and it actually produces a passable manuscript, can I slap my name on it and call myself the author? Many would do so without hesitation (some probably already have). Others would feel a pang of imposter syndrome—after all, didn’t the AI do the heavy lifting? And if the style and substance are modeled on me, is the AI just an extension of my mind, or an independent ghostwriter I happen to “own”?
Online, people are already raising eyebrows. On that same Reddit thread, one user replied skeptically to a long, eloquent AI-written post: “Are you ChatGPT? ’Cause you write like it.”
(reddit.com). Burn. The insinuation is clear: this doesn’t sound human. It’s too polished, too verbose, hitting all the right notes in a somewhat generic way. As AI text becomes more common, we’re developing a sixth sense for detecting it—the way you might detect that a piece of music was obviously made on a computer. There’s a certain soulless sheen that gives it away. And being told “you sound like ChatGPT” is the 2025 equivalent of being told your writing is banal or derivative.
But then, what is human writing except a remix of influences anyway? The French theorist Roland Barthes famously declared “the death of the Author” back in 1967, arguing that texts aren’t about their creators’ intentions but about the reader’s experience (newstatesman.com). He provocatively suggested we see authors as mere scribes arranging words, not holy originators of meaning (newstatesman.com). Well, generative AI has taken that idea and given it a techno-capitalist twist: now we literally have scribes that aren’t human at all. When ChatGPT produces a paragraph, who is the author? The human who prompted it? The countless unknown humans whose writings in the training data inspired it? The team at OpenAI who engineered the model? Or the machine itself, that statistical parrot stitching patterns together?
It’s a question that’s scrambling intellectual property law and creative norms. If you’re feeling uneasy, you’re not alone—authors and artists are up in arms about their work being ingested by AI without credit or compensation. In late 2023, a group of prominent writers (including the likes of John Grisham and George R.R. Martin) sued OpenAI for scraping their novels to train ChatGPT (reuters.com). Visual artists have filed class-action lawsuits against AI image generators for devouring their illustrations in the training process (documentjournal.com). Even Getty Images is suing, after finding AI-generated pictures with distorted copies of the Getty watermark—smoking-gun evidence that the AI was trained on their stock photos (documentjournal.com). The irony is rich: these models are hailed as revolutionary creative engines, yet they’re built on uncredited copying of human creations. Picasso (apocryphally) quipped, “Good artists borrow, great artists steal.” One commentator wryly asked why it’s brilliant when Picasso says that, but “theft” when AI does the same (m.facebook.com). It’s a fair point. Human artists have always learned by imitating and borrowing from predecessors. But we draw a line at wholesale machine consumption of millions of artworks or books in one gulp – that starts to feel less like inspiration and more like appropriation on an industrial scale.
Ghostwriting at scale. As AI systems increasingly generate text (and art), society is grappling with who owns and authored the content. Is the writer the person, the algorithm, or the countless humans whose data trained the model? Cases of AI-written articles and even books have already blurred these lines, forcing us to rethink creativity and credit in the age of artificial authors.
For the individual human using ChatGPT, the moral calculus can be personal. Some see using AI to write for them as a tool, no different than using Grammarly or a thesaurus – just a more advanced assist. Others feel it’s a slippery slope to deceiving your audience (and yourself) about who actually did the work. On Reddit, a tongue-in-cheek comment captured this tension: “ChatGPT, take my concept that isn’t deep and make it sound way deeper than it really is so I can copy/paste it and get the internet points.”
(reddit.com). Ouch. The poster is mocking those who use AI to generate pompous social media posts for clout. It’s basically intellectual catfishing – passing off machine-generated eloquence as your own. And it works; the internet points (likes, upvotes) flow in, and nobody’s the wiser… or are they? If everyone starts doing this, we’ll either get a whole lot of impressively well-written content everywhere, or we’ll adjust our authenticity detectors and discount anything that smells AI-generated. (Likely both.)
Interestingly, the very prompt “don’t use external sources, rely on your own knowledge” reflects a desire for authentic creativity. The user who gives that instruction might be thinking: I want to see your originality, AI, not just a regurgitation of Wikipedia. It’s almost as if the AI is a student and the user is a teacher saying “no cheating”. We want to believe the AI has internalized understanding and isn’t simply copying an answer from the back of the textbook. But is that a meaningful distinction? In humans, we celebrate those who can draw on memory and insight to produce something new, whereas someone who just googles and plagiarizes is frowned upon. We’re trying to enforce the same standard on the machine: demonstrate that you “know” things and can be creative, without directly lifting from a source. The irony is that, for the AI, there’s no clear boundary – it only “knows” things by having seen them somewhere. When ChatGPT composes a paragraph about, say, the French Revolution without quoting an article, it’s doing a high-dimensional remix of everything it read about that topic during training. It feels original to us, but a decent chunk of those facts and phrases probably did come from Wikipedia at some point. The AI just obfuscates the trail so we can maintain the illusion of originality.
The Intelligence Paradox: Inside vs. Online
That brings us to a deeper philosophical puzzle: what do we consider true intelligence in the age of AI? Once upon a time, being smart meant knowing lots of facts. If you could recite encyclopedic knowledge, you were a genius. Now, any smartphone can fetch more facts in a second than a savant could recall in a lifetime. So perhaps intelligence is shifting toward the ability to synthesize, to interpret, to create. We expect our artificial intelligences to do the same—no points awarded for just finding information. We want them to understand and generate. That’s why a request to “rely on your own knowledge” makes sense: prove to me you can discuss this like a learned entity, not just copy text.
Yet, humans rarely create in a vacuum. Every artist, writer, and scientist builds on an edifice of existing knowledge. As Isaac Newton said (himself borrowing the phrase, with a cheeky assist from history), we stand on the shoulders of giants. The difference with AI is the scale and opacity of that borrowing. When I write an article, I might recall a dozen books and articles I’ve read; I might even cite a few. When ChatGPT “writes” an article, it draws on thousands of sources but cites none unless explicitly asked. It’s a black box of influences. This makes us uncomfortable. It feels unearned. We tend to personify AI when it impresses us, but the moment we catch it leaning on a source, we go “aha, just a dumb machine after all!”
Consider this: in school, open-book exams are easier than closed-book ones. We value the ability to carry knowledge in your own head. But why? Perhaps because it demonstrates a deeper processing—if you remember it, presumably you understood it enough to integrate it into your mind’s schema. An AI with a perfect memory (of training data or via browsing) blurs that line. It has essentially read and memorized (in vector form) the entire library of humanity. Does it understand it? That’s the million-dollar (or billion-dollar) question. If it can produce an insightful essay without live access to the net, does that mean it comprehends the material, or is it still just a very convincing stitch-up? Philosophers have debated concepts like this in thought experiments for years (see: Searle’s Chinese Room argument, which questioned if symbol manipulation alone equals understanding). Now those academic debates have leapt into practical reality.
Interestingly, each time we constrain the AI (no browsing, knowledge cutoff, etc.), we’re shaping its role. Are we using it as an oracle of knowledge or as a collaborative creator? When we say “don’t browse, just use your training,” we are treating it like a savant with a fixed library in its brain—more akin to a human expert. It forces the AI to draw associations within what it already “knows.” In many cases this leads to more coherent, thoughtful answers than a scattershot web search would. It also avoids the pitfall of sucking in potentially irrelevant or low-quality info from the wild web. On the other hand, when we do let AI browse, it can augment its responses with real-time data or specifics (at the risk of just quote-mining).
Our view of AI’s capability is evolving with these modes. With the new memory features, some people are treating ChatGPT less like a fancy Wikipedia and more like a thinking partner. One Reddit user imagined a near future where each of us has a personal LLM as a filter and companion: “a custom LLM filtering all data on our behalf, ever skimming, ever scanning, ever pattern matching, ever interacting with other LLMs.”
(reddit.com). It’s a vision of AI as an always-on mediator between us and the vast info ocean—like an AI librarian who not only fetches the book you need but also remembers why you needed it and reminds you next time. In such a world, the line between your knowledge and the internet’s knowledge practically disappears. Your AI will effectively be your memory cache for anything you didn’t bother to learn by heart.
This recalls another old cautionary tale: Plato, in Phaedrus, recounts Socrates’ story that the invention of writing would “create forgetfulness in learners’ souls, because they will not use their memories.” Freed from the need to remember everything, people might stop truly understanding. That argument echoes through history whenever a new technology externalizes some mental task. (Calculators will ruin mental arithmetic! GPS will ruin map-reading skills! Google will ruin remembering facts!) Yet, humanity generally adapts: we offload the grunt work to our tools and move on to more complex cognitive tasks. With AI, we are offloading not just memory or arithmetic, but potentially creativity, decision-making, even emotional labor (e.g. AI therapists or companions). What will we do with those freed-up mental cycles? Perhaps achieve new heights of innovation—augmenting our minds with AI could lead to a golden age of problem-solving. Or perhaps we’ll get lazy and dull, letting the machines do all the imagining for us while we binge-watch personalized AI-generated shows (more on that in a second).
Co-Creating the Future (Whether We’re Ready or Not)
Let’s step back and look at the big picture: Why do people want AI that feels independent and creative in the first place? Why not just use it as a glorified search engine? Part of it is the science-fiction dream of true AI—an intelligence that can surprise and inspire us, not just obey like a calculator. Part is practical: an AI that can autonomously generate art, code, essays, or strategies is immensely useful (and potentially profitable). And part of it is the mirror it holds up to us. If an AI can be original, what does that say about human originality? If it can collaborate, who gets credit? We’re entering an era of human-machine collaboration that is both exciting and unsettling.
On the optimistic end, some are thrilled. “It’s a super exciting time to be alive,” one Redditor gushed about the AI-powered future. “Imagine having a 24/7 personal assistant that generates to our vibe… a soundtrack for your day… custom TV series starring our favorite characters… interactive worlds where stories adapt to our emotions… AI-powered teachers available 24/7, personalized to our learning style…”
(reddit.com). The vision is a personalized utopia: entertainment, education, and work all tailored by AIs that know us intimately. With AI co-creators, each person can have their own bespoke reality. You want a novel in which you’re the hero? Done. You want a recipe book that perfectly suits your dietary needs and tastes? On it. You want music that evolves with your mood? Here’s an infinite Spotify playlist composed just for you, never heard before by anyone else. This futurist thread reads like a tech billionaire’s fever dream of ultimate personalization.
But there’s a dark side to hyper-customization. “You don’t realize this will make you detached from reality and humanity,” another commenter retorted, warning that if everyone lives in their own personalized bubble, “it won’t be relatable for other people. It will isolate you.”
(reddit.com). It’s the echo chamber problem taken to an extreme. Today we already have social media algorithms feeding us what we want to hear, and it’s fragmenting society into filter bubbles. Tomorrow, if all our media and interactions are AI-curated just for us, will we even have a shared reality anymore? One person lamented, “The ultimate echo chamber.”
(reddit.com). When your AI knows you inside out, it may never challenge you in ways you don’t like. It could coddle us with comfortable lies or half-truths. In the worst case, each of us becomes the proverbial emperor of a private empire of one, wearing invisible clothes while our AI servants dutifully compliment our style.
There’s also the unsettling prospect that our “personal” AIs might start talking to each other and making decisions on our behalf, possibly without us even realizing. “‘Our’ AIs talking to each other about us sounds truly insane. Buckle up boys…,” as one user put it (reddit.com). It sounds like a sci-fi plot: your digital clone negotiates a business deal with my digital clone while we humans sip coffee, then they present us with the plan they’ve concocted. Perhaps they’ll do this in the open, with our approval – scheduling meetings, coordinating friendships (“hey, my AI talked to your AI and we both like hiking, want to go this weekend?”). Or maybe, in a more dystopian take, they’ll conspire in ways that nudge us without our awareness (your AI knows you get grumpy when hungry, my AI knows the optimal time to ask you for a favor is after lunch, so it waits to deliver me into your office until you’ve eaten…). Far-fetched? Maybe, but elements of this are already here in targeted advertising and algorithmic persuasion. The difference is the personalization and autonomy these future AIs might have.
Ultimately, telling an AI “don’t use external sources” is a way of asserting a bit of control and clarity in this fast-moving landscape. It’s like saying: show me your workings, not someone else’s. We want to pin down the AI’s contribution, understand it, maybe even hold it accountable for what it says. As these systems get more entangled with our lives, that accountability and transparency become critical. If my AI doctor gives me advice, I’d kind of prefer it not just copy WebMD, but actually synthesize my personal health data with general medical knowledge and explain the reasoning. If my AI friend/therapist is cheering me up, I’d like to think it’s drawing on what it knows about me, not spouting Hallmark card quotes from the internet (otherwise I might as well just read inspirational quotes on Pinterest).
In the end, the prompt “rely on your own knowledge” speaks to a deeper wish: that these artificial minds can develop something like an inner voice or understanding, not just be puppets dangling from the internet’s strings. We’re personifying the machine (“your knowledge”) in order to elevate it to the level of a creative equal—or at least a worthy assistant. And by doing so, we’re forced to reflect on our own knowledge and creativity. If an AI can meet that challenge—producing insightful, context-aware work from its stored “knowledge” alone—then the line between human and machine intellect shifts a little more.
Perhaps one day soon, we won’t distinguish between an AI looking something up and recalling it from training any more than we distinguish between remembering something ourselves and consulting our own notes. The AI’s knowledge will be our knowledge, seamlessly integrated. And the notion of “browsing the net” versus “knowing” will collapse for AI, just as the boundary between my brain and my notebook has already collapsed for me when I consider my total memory resources (en.wikipedia.org). We’ll have to find new ways to define originality, when every creation is a collaboration between human imagination and machine aggregation. New ways to define authorship, when an “author” might be a composite of multiple humans and AI editors. New ways to find common ground, when each person’s reality is co-crafted by their AI to their tastes.
It’s a wild ride ahead. We started with a simple request to not cheat by browsing, and we end up questioning the nature of creativity and self in an age of AI. One thing is certain: we’re going to learn a lot about ourselves by the way we interact with these machines. As the Reddit commenter said, “Maybe it won’t take too long for us all to find out.”
(reddit.com). Until then, the next time you see an eerily well-written post online and wonder if it was human or AI, remember—it might be both. And the truly interesting question isn’t who wrote it, but rather: does it really matter?