
Tim Requarth | Longreads | April 9, 2026 | 18 minutes (5,003 words)
How does the brain decide what’s real? It’s a question most of us never have to ask. Our memories feel like records—imperfect, sure, but records nonetheless. We trust them to tell us where we’ve been, what we’ve done, who we are. But that trust rests on neural machinery we can’t access, reality-sorting processes that operate beneath conscious awareness.
We’ve been fortunate to publish Tim Requarth in the past. Please be sure to check out “The Final Five Percent.” The piece won a 2020 Science in Society Journalism Award and was anthologized in The Best American Science and Nature Writing in 2020.
My wife insists we once took a yoga class together, early in our relationship. She remembers the teacher vividly (a French acrobat, rainbow dreads, apparently quite a character), where we sat (to the left of the door), and the color of the yoga mats (teal). I insist she is misremembering: I have never been to a yoga class, even to this day. I scrolled back years through my phone’s location history once to settle it, but we’d started dating not long after the iPhone came out, and if the data ever existed, it was gone. The yoga story comes up every few years, but we never resolve it. It is probably unresolvable. As a neuroscientist, I know how these things happen—the encoding mishaps, the source confusion, the neuroscience of how two people can end up telling different stories about the same afternoon. This knowledge has never once brought us closer to agreeing.
I was thinking about this story when I heard something strange from a neighborhood friend of mine, Andrew Deutsch, who was using OpenAI’s Sora app. Sora, if you aren’t familiar, worked like this: You would record your face, say a few numbers, rotate your head left to right. Moments later, you would have an AI video replica of yourself, a self-deepfake, insertable into any scenario you could prompt the AI to produce. Scuba diving with SpongeBob. Dancing K-pop style in a futuristic cityscape. You could then share your videos with your friends and scroll through the videos of others, in what was often described as a “TikTok for deepfakes.” Sora hit one million downloads in only five days. Six months later, OpenAI shut it down, reportedly redirecting resources toward coding tools ahead of a planned IPO. Consider this, then, a eulogy for Sora, a technology with the lifespan of an off-Broadway flop that, in its brief and ignominious run, exposed a crack in human cognition that the next self-deepfake app will surely exploit.
I’d had an early invite to try it for weeks, but couldn’t quite bring myself to open it. Deutsch, on the other hand, had been using Sora heavily. He’s worked in animation and augmented reality for 20 years; he’s both interested in and not easily duped by tech. But he’d been using it to make AI videos of himself doing things he’s never done. And now he’s having trouble. Not with the videos. With his memory.
He created an AI-generated video of himself scaling Mount Rushmore and watched it several times. Then, a few weeks later, he was getting his dog ready for a walk. He felt a flicker of recollection, of that time he’d climbed Mount Rushmore. “I felt just this twitch of confusion about it. It felt like a memory, very faintly.”
Not a full memory, exactly. But not not a memory either.
A memory twitch like this might not sound alarming, especially considering the many dangers of deepfake technology like Sora. Within hours of its public release, users were generating videos of mass shootings, copyrighted characters promoting crypto scams, SpongeBob dressed as Hitler. Misinformation, slop, harassment. Those are real problems. But something subtler and eerier is going on with Deutsch. His minor but real neurological glitch is a sentinel signal that technologies like Sora are capable of interfering with some of the brain’s root processes. Things like autobiographical memories, which form the raw material of identity. Things like how the brain determines whether a thought is a memory based in reality, or not. Sora was just the first app that let you deepfake yourself; I suspect it won’t be the last. I wanted to understand what was happening in the brain—and what it means that a free app on your phone can now manufacture, in seconds, the kind of mental imagery the brain is least equipped to reject.
To get a sense of what might be going on in Deutsch’s brain, I called up Elizabeth Loftus, the psychologist who made “false memory” a household word. Her famous 1995 “lost in the mall” study convinced people they’d been lost in a shopping mall as children, using nothing more than a fabricated paragraph slipped in among real family memories. More recently, she teamed up with MIT’s Media Lab to show that AI-generated videos from AI-edited images could double false memory rates.
When I described what Deutsch had experienced, she wasn’t surprised. Exposure to AI-generated images or video, she said, could absolutely contaminate memory.
She was intrigued. Most deepfake research, including her recent work with MIT’s Media Lab, focuses on memories about other people or events. The concern is misinformation. You see an AI-altered image of a politician, and later you misremember what they did. At scale, chaos ensues. What Sora enabled was different: false memories about yourself. I’d first heard of Loftus’s work during a seminar on memory at Columbia, sitting in a cramped room at the Neurological Institute on 168th Street. The circumstances always seemed like edge cases: undergrads tricked by clever experimenters, or traumatized patients confused by leading questions. This was not a phenomenon I thought would be replicated by a short-form AI video slop app.
Understanding why we’re so susceptible to false memories requires understanding that the brain doesn’t store memories the way a phone stores photos. When you live through something, your hippocampus—a deep brain structure vaguely shaped like a seahorse—encodes that experience by binding together its constituent pieces: what you saw, what you heard, where you were, how you felt. That bound-together pattern is the memory. Over hours and days, the hippocampus replays these patterns, perhaps while you sleep, gradually strengthening their hold in the cortex, in a process called consolidation. What makes these memories so unlike phone storage, and especially relevant here, is that recalling a memory means the brain must partially relive it. The brain recalls by reactivating some of the same sensory and spatial patterns that were present during the original experience. Your brain doesn’t access a stable, static stored memory of yourself at that summer picnic in the park; your brain recreates it by activating some of the same neural circuitry that fired when you were actually squinting in the sun, actually wiggling your toes in the warmed grass. During recall, it fires again, faintly.
The beauty of memory, not as a static storage bank but as a dynamic process of on-demand re-creation, is that it’s efficient. You can access a tremendous amount of information about your past without having to dedicate special storage space to your personal archive. But that efficiency comes with risks. Each time you replay and reconsolidate a memory, it can subtly change. Other things you’re thinking about during recall, how you feel while recalling it, other similar memories that activate overlapping patterns of neurons: these can mix and mingle and, ultimately, change the reconsolidation of the original memory itself. And once changed, it doesn’t revert, because there is no gold-standard stored version. There is only the latest replay. And because memories are, essentially, reactivations of specific patterns of sensory and other neural activity, sensory patterns alone can get consolidated as memories. This is a false memory. And a false memory, once seeded, benefits from the same machinery as a real one. The brain’s fact-checker, the prefrontal cortex, arrives late to the scene: the reactivation of sensory and other neural pathways is already underway, the memory reconstruction already in progress, before any evaluation of whether the memory is genuine even begins.
Applying any of this to a deepfake app is uncharted territory, but talking to Loftus, I started to see how false memory science might apply. The passage of time would likely be important. Initially, a person might remember creating a specific video, and the mind could reject the contents as false. But if the memory of creation fades while the contents persist, the prefrontal fact-checking defenses begin to disappear. The false memory is more likely to feel real. False memories would probably strengthen with repeated exposure to the video—the illusory truth effect shows that repetition makes false claims feel truer, and while studies of false autobiographical memory have mostly involved active suggestion rather than passive viewing, those using multiple sessions consistently produced stronger effects. Each rewatch would essentially stimulate replay processes in the brain, helping to further consolidate the false memory. And knowing the videos are AI-generated may not matter: In Loftus’s MIT study, labeling content as “AI-enhanced” didn’t prevent false memory formation. We would probably tend to defer to AI-generated videos simply because they resemble the kind of external record we’re used to treating as incontrovertible truth. Sora capitalized on every one of these dynamics: synthetic video of yourself, in your pocket and infinitely rewatchable, stealthily inheriting the authority already granted to the phone’s camera roll.
To illustrate how these forces compound, Loftus offered her own poignant memory. “My house burned down in a large fire in Los Angeles. This is when I was in high school. But it happened to have appeared in a magazine—there were photographs.” She’d consulted that magazine repeatedly over the years, the way Deutsch might end up returning to his Mount Rushmore video. “And my entire memories are just what’s in this magazine. Now, if you asked me anything else that isn’t a picture here, I think I’d have trouble telling you.”
The external record of an event, repeatedly visited, becomes what you remember. This strikes me as why labels and tech literacy can only go so far in protecting our minds from what Loftus’s MIT study calls “synthetic memories”: memories implanted by AI of events that never occurred. You can know exactly how the trick works and still fall for it, because metacognition doesn’t override encoding. The kinds of proposed fixes I’ve heard of—things like labels, disclaimers, AI literacy initiatives—will probably help, but only partially, because they assume that knowing is enough, that we have a level of conscious awareness of, and control over, memory formation that, biology suggests, we simply don’t have.
Of course, there’s one big difference between Loftus’s memory of the house fire and Deutsch’s fanciful scaling of Lincoln’s nose. One was real, the other wasn’t. Not only unreal, but unlikely. “I would make a distinction between something that’s plausible and implausible,” she said. “If suddenly there’s a picture of you in a Russian prison in Siberia and you’ve never been, you’re obviously going to be able to reject it. Maybe you’ll have a weird feeling seeing yourself, but you’re just going to know you’ve never been.”
Fair enough. I started to think that maybe it’s a stretch to say that Deutsch’s “twitch of confusion” is anything to be concerned about. But then I talked to another Sora user and things got weirder.
Elena Piech is an interactive producer who has spent years building immersive experiences like virtual reality for major entertainment and technology companies. She had been experimenting with AI video tools for months, but something about Sora was different. When we talked, she was trying to pin down what exactly was happening when she watched herself in imaginary scenes. She gave an example: a video of her avatar watching a huge screen overlooking a Blade Runner-style futuristic city.
She said she could describe the panorama of that scene, what it felt like to be there, overlooking the city, even though the place doesn’t actually exist and she knows she couldn’t have visited.
What Piech was describing, I realized, involved spatial memory: the brain’s capacity to encode and reconstruct the three-dimensional layout of environments you’ve inhabited. When you remember your childhood bedroom, you don’t just recall an image; you can mentally rotate through the space, sense where the door was relative to the window, feel the room’s proportions. The hippocampus is central to this process, building what neuroscientists sometimes call cognitive maps—internal models of space constructed from actual navigation and sensory experience. Normally, fiction doesn’t produce this kind of encoding. Piech told me she’d recently started watching Friends for the first time and found she couldn’t do the same thing. The set stayed two-dimensional, flat, a place observed but never inhabited. But with her Sora generations, Piech said she could feel the spatial layout around her synthetic self, a sense of the panorama and depth of a Blade Runner cityscape that couldn’t possibly exist. She described it as a “3D mind map,” a visceral sense of the space she usually associates with places she’s actually been. If Deutsch had described a neural ripple, Piech was describing a wave—an electrochemical disturbance that didn’t dissipate but propagated until it lapped the shores of distant brain regions, setting off the encoding of spatial memories for places she’d never visited.
David Pillemer, an emeritus professor of psychology at the University of New Hampshire who studies how specific moments lodge in memory and shape our lives, offered a clue. When a memory includes a visual image, he told me, the person remembering it is more likely to believe it actually happened. Seeing yourself in the scene is a hallmark of vivid memories. There’s an evolutionary logic to this, he explained. “If your life was in danger 5,000 years ago and you were at the water hole and the tiger came up, if you have a visual image of what happened, it’s good to not only hold that image, but believe the image, trust it. You’ll avoid that water hole.” The visual doesn’t just record experience; it confers credibility. I thought about the yoga teacher—the French acrobat with dreads, the studio, the spot where my wife says we sat. Her evidence was a lifelike mental image. Mine was an argument. Pillemer had just told me which one the brain trusts. And that ancient trust, calibrated over thousands of generations to actual water holes and actual predators, doesn’t have a mechanism to determine whether the image was rendered on a server farm.
Piech’s experience suggests that Sora videos could activate spatial memory, which would mean they also tripped up the brain’s more fundamental systems for sorting real from imagined. “Although it may be disconcerting to contemplate,” as cognitive psychologist Marcia K. Johnson wrote in a 2006 paper, “true and false memories arise in the same way. Memories are attributions that we make about our mental experiences based on their subjective qualities, our prior knowledge and beliefs, our motives and goals, and the social context.” Johnson’s work on source monitoring, the brain’s process for sorting reality from imagination, revealed there’s no tag, no stamp in the brain that says this actually happened. Instead, a scene’s qualities during recall—how vivid it is, how spatially coherent, whether it arrives unbidden or requires effort to reconstruct—are what make it feel real or imagined. Memories of actual events are usually richer, more embedded in space and context. Imagined scenes, or recollections of scenes from movies, tend to feel thinner, more schematic. But the distributions overlap, and the brain relies on these imperfect cues to sort memory from imagination.
The trouble is that these cues can mislead. If remembering a synthetic experience activates the brain just widely enough—rich perceptual detail, spatial depth, the feeling of having been somewhere, of having been with someone—it stops registering as fantasy and starts registering as memory. Piech’s recollections of Sora generations were arriving with enough of those qualities to blur the distinction.
I expected that the fantastical or outlandish videos would have unsettled Piech the most, but that wasn’t the case. Most unsettling were the videos she asked to be set in her apartment, which Sora had apparently extrapolated from the background of her initialization video: a glimpse of the TV, two picture frames on the wall, enough for the model to generate something that felt, as she put it, “65 percent there.” The wall colors roughly right, the TV in the right place, the pictures close enough. Her first instinct was that OpenAI had somehow accessed her camera roll. They hadn’t; Sora had just guessed well enough—presumably from the selfie snippets she used to initialize the app—to briefly fool her about her own living room.
This is the plausibility threshold Loftus was pointing out. A Russian prison is easy to reject—there are other, more systematic cognitive processes that check what feels familiar against what you know, and you know you’ve never been to Russia. But your own apartment at 65 percent fidelity sits in a zone of ambiguity. It activates familiarity circuits, which run through the perirhinal cortex and operate partly beneath conscious awareness. When Piech’s Sora-generated apartment matched enough features of her actual living room—TV placement, wall color, the pictures close enough—it activated perirhinal neurons, recruiting enough neural corroboration to slip past whatever rational defenses would reject it as synthetic. It started to feel real.
Then there’s what happened with the jet-ski video. Piech and Deutsch know each other, and Sora let users grant permission to appear in each other’s generations, so Deutsch made a video of the two of them jet-skiing on the East River in a gang called the Barracudas, talking smack to tourists on the ferry. Piech laughed when she watched it, but she also had an odd sensation. “It’s weird describing this to you,” she said. “Obviously it’s just a video. But it kind of does feel like—oh yeah, we hung out. Somehow my brain’s like, yep, that’s a social interaction.” The neural wave had propagated further, reaching brain areas that encode not just place but social connection.
I don’t know what to make of all this. A faint spatial memory of a place that doesn’t exist, a glimmer of social connection from an interaction that never happened—these might seem like neurological curios, oddities to file away in an academic’s desk but nothing to spill 5,000 words over. But I find myself unsettled in a way I can’t quite shake. Sora wasn’t just producing AI slop. The neural systems being activated here—the ones that register social connection, that lay down the raw material of who you are, that sort real from imagined—aren’t supposed to be accessed by a random app. They’re supposed to require actual experience. And yet.
I’m not going to tell you how worried you should be. But I want to think through what this might mean.
One potential consequence is how these tools could shape identity, at scale. I was particularly taken by a term Deutsch coined: propagandi, or propaganda directed at yourself. If propaganda works by shaping collective memory, propagandi is more atomized, more intimate. You’re the propagandist and the mark, constructing a version of yourself that doesn’t exist, for an audience of one. I called Northwestern University psychologist Dan McAdams to help me stress-test Deutsch’s speculation. McAdams developed the influential concept of narrative identity—the idea that identity is built from autobiographical memories, that the self you’ll be tomorrow is constructed from the memories you have today. Contaminate the memories, and the identity may shift. When I described what Sora users like Deutsch were experiencing, McAdams said he hadn’t heard of the phenomenon yet, “but a moment’s reflection suggests that it is inevitable.” These AI videos could “ultimately be encoded and reworked as ‘things that happened to me,’ and then perhaps ‘important things that happened to me that are now part of my life story.’” Propagandi, in other words, isn’t just a clever coinage. It names a mechanism for rewriting who you are.
A hopeful read isn’t hard to find. Piech made a K-pop dance video of herself, fluid and confident, moving in ways she can’t. After watching it a few times, she told me, she started to feel like maybe she actually could. Athletes have used visualization for decades; maybe Sora was just a more vivid format. Therapists working with trauma have long known that memory can be beneficially malleable; perhaps tools like Sora, carefully deployed, could help people revise the scenes that haunt them.
But consider who’s building these tools. OpenAI confirmed that user prompts and outputs trained the model by default; meanwhile, videos that were saved, shared, or regenerated almost certainly shaped the feed. Users saved confident-looking videos and regenerated awkward ones. Across millions of interactions, the system would drift toward flattery. More than a decade of social media research has documented the harm of exposure to idealized images of others. But there’s always been an escape hatch: The comparison is to someone else. The escape hatch works because comparison requires holding self and other apart. What happens when the idealized image is you? What the memory research suggests is that Sora generations could have, with time, slipped beneath that defense. The gap between your real autobiography and your synthetically infused, commercially tainted one could drive a pervasive sense of inadequacy, as your actual life fails to live up to a narrative identity that was never yours to begin with. “What people could do with marketing and this technology is making a lot of people salivate right now,” Deutsch wearily noted.
Imagine how this plays out for a 17-year-old girl who’s been on the app for months. She’s given it her face, her voice, her mannerisms. The app knows from her browsing that she’s been looking at prom dresses. But the video that appears in her Sora feed isn’t anything special—it’s just her, in her own bedroom, getting ready for school on what looks like a normal morning, wearing a dress from a brand she can’t quite afford. The synthetic version of her isn’t doing anything extraordinary. She just looks like herself on a slightly better day. Skin a little clearer, hair a little more together. The dress, by the way, is tagged and purchasable. On Instagram, viewing someone else’s photos, she’d be comparing herself to someone else, and there are more psychological defenses against that: The other person’s feed is curated, it’s not real life, and so the comparison isn’t fair. This defense doesn’t always work, but it’s there. Sora was different. The feeling isn’t envy. It’s closer to confusion. Not I wish I looked like her but Why don’t I look like that? or, even more insidious, Why don’t I look like myself? And if she’s been watching these videos for months, memory research suggests that the AI videos won’t register as memories of AI videos. They’ll feel vaguely like mornings she half-remembers, days when things just came together a little more easily. Each actual morning, in her actual mirror, will begin to feel like an off day, which is precisely the feeling the whole experience was engineered to produce, and precisely the feeling a “Buy” button is eagerly positioned to resolve.
But something else was nagging at me, in addition to the potential psychological consequences: Even something as intimate as autobiographical memory doesn’t form in isolation. It’s fundamentally social. In a process scientists endearingly call maternal reminiscing, children learn to shape experience into story through dialogue with caregivers, a process that continues throughout life: the friend who leans in or looks skeptical, the partner who remembers it differently, the listener who asks a question that reframes the whole event. Even the distraction level of the listener can affect how well we remember our own experiences. In one experiment, a psychologist had participants tell a story to a friend who was secretly distracted. A month later, the speakers remembered their own experience less well simply because of how a listener behaved during their retelling of it. The attentive listener isn’t just receiving the memory; they’re helping to construct it.
Now imagine referencing something your friend doesn’t share, because it never happened. The blank look. The awkward silence. You might question yourself, wondering if you imagined it. You might question them. Or you might learn to stop bringing it up altogether, retreating from actual human social interaction to more AI simulacra of human social interactions, which never push back, which always affirm. The false memory, born in isolation, produces isolation again when it enters conversation.
Piech seemed to be thinking about this, even if she wasn’t citing social psychology to back her intuitions up. Her partner was traveling for two months, and Piech found herself pondering whether she could use Sora to maintain a sense of connection—generate videos of them together, something to watch when she missed her partner. Then she thought through what that would actually mean: one person accumulating an archive of shared experiences the other had never seen, building memories of a relationship that only existed on one side. “What if I watched all of them and she didn’t watch them,” she said, “and now I’m referring to these things that she has no idea about?” She decided not to make the videos.
Piech stopped because she wondered what Sora might do to her relationship. I couldn’t even get started. I’d pulled up Sora’s App Store page at least twice, downloaded the app, then deleted it. Each time, I’d toggle the invite text message back to “unread” and swear to think about it harder, later. I already had reasons not to download Sora—the usual ones, about data privacy and the general question of whether the world needs more AI-generated slop. Those are the reasons I’d give if you asked me at a dinner party. They’re rational, articulable, and they were all in place before Deutsch told me about his bizarre memory twitch.
What I likely underestimate is my own vulnerability to this technology. The psychological and neurological mechanisms that sort real from imagined—the ones Pillemer described, the ones Johnson mapped, the ones Piech felt activate while she watched herself in a city that doesn’t exist—don’t always check in with the part of you that might know better. They don’t consult your AI literacy or your PhD or your healthy skepticism about OpenAI’s privacy policy. They run underneath all of that, and they trust pictures.
The yoga class thing is amusing, but it truly is a simple question of whether my wife has a false memory or I have forgotten something that really happened. I honestly have no idea which it is, and as confident as I am that I’m right, neuroscience suggests this confidence is unwarranted, and I accept that. There was a period, years later, when my wife and I had differing accounts of a consequential series of events. For a while, we couldn’t talk about the subject at all. Not a misremembered yoga instructor but a stretch of our life together that we had apparently lived through twice, once each, in parallel versions that couldn’t be reconciled. I won’t get into the details. Here, two people agreed they were present for the same events in real time, but experienced them differently (for all the reasons this happens—hunger, emotional state, past experiences, attention). That difference in experience, in turn, led to different memories—the way we suppress certain details, cast an action in a different light. You could say one of us was remembering right and one of us was remembering wrong, but the reality is more complex. We were remembering things that happened the way memories are always made: subjectively and idiosyncratically. Memory, with time, is essentially storytelling. And with time, we settled into two separate narratives, each replete with its own accompanying details.
The hardest part wasn’t the disagreement itself, but what it took away: the quotidian ability to say Remember when? and have the other person nod. You don’t realize how much of a relationship runs on the shared archive, the stuff you can both reach for without negotiation, until it’s not there. We got through it, but what we arrived at wasn’t closure, the way a judge would rule on the facts of the case. It was something richer and truer to the human experience: that another person’s memory and experience can diverge from yours, and that to love someone is to accept rather than be threatened by this divergence—that a perfectly duplicate shared memory bank is not a prerequisite for building a life together.
My wife and I made our divergent narratives the old-fashioned way, with proximity and time and two brains encoding and decoding the same events differently. Sora did it in a closed loop between you and a screen. No one to push back, no one to say that isn’t what happened. By the time the memory enters a conversation with someone who actually shares your life, it has already hardened into something that feels like yours.
Some nights after our son is asleep, my wife and I sit on the couch and reconstruct the day for each other. What he said at breakfast, the weird thing he did with his yogurt spoon, whether the stalling tactics at bedtime were really that outlandish or whether we were both just tired. Sometimes we seem to disagree on the details, even if we were both there. She’ll point out something I didn’t notice, or I’ll interpret something we both noticed differently, or she’ll add a layer of interpretation by connecting his actions to similar actions the day before. The narrative shifts a little, adjusts a little to accommodate both of us, and by the time we’ve moved on we both begin to consolidate memories of something neither of us quite experienced—which, in the end, is the uncomfortable truth: Memory and experience are not synonymous. I used to think of this process as more akin to fact-checking, of sifting fact from embellishment, reality from interpretation. But it’s not quite that. It’s something more meaningful than checking facts: sitting there, remembering them together.
Tim Requarth is director of graduate science writing and research assistant professor of neuroscience at the NYU Grossman School of Medicine, where he studies how artificial intelligence is changing the way scientists think, learn, and write. He writes “The Third Hemisphere,” a newsletter that explores AI’s effects on cognition from a neuroscientist’s perspective. His essays and reporting have appeared in The New York Times, The Atlantic, and Slate, where he is a contributing writer.
Editor: Krista Stevens
Fact-checker: Julie Schwietert Collazo
Copyeditor: Cheri Lucas Rowlands