Sanity by Design
0x41434f
A Systems Audit of the Psychiatric Care Stack
"The mental health problem is medical, but the solutions are not just medical. They're social, environmental, and political." — Dr. Thomas Insel, former Director of the NIMH
108/62
This year did not go the way I expected. I began it with hope and discipline, taking part in a forty-day prayer and fasting program at my church, believing it would be my best year. Instead, by summer, the life I had carefully built fell apart. Several of my worst fears happened one after another, and every backup plan I had failed. What surprised me was not the collapse itself but my response. I felt calm even as everything was breaking. My blood pressure during that time never rose above 108/62, the lowest reading of my adult life. My friend J always told me I did not know how to ask for help, and I saw that he was right. I leaned on my therapist, and my family prayed for me, but one question kept turning in my mind: Why was I so calm while my world was collapsing, and why do we still lack clear answers about how the brain works when life overwhelms us?
That question shifted my path. I realized psychiatry does not yet have the same kind of objective measures that most other areas of medicine rely on. Diagnoses are based mostly on symptoms and what patients say, which makes treatment uncertain. My search for better answers drew me into a new field called computational psychiatry, which uses data, math, and technology to understand the brain more precisely. At the same time, I learned how important human care is in mental health. My collapse set me on a mission to bring these two sides together because I want to build objective tools that support psychiatry while keeping the human touch at its center.
I had been in a race against time, wanting to build and successfully exit a software company before Artificial General Intelligence changed everything. My team and I used the newest AI tools to build products that impressed people, earning meetings and praise but not enough paying customers. I do not think it was because I lacked skill; I can design products and talk to customers. The real issue was that I did not have a strong network to sell into big companies, and I delayed talking to investors until I had a proven product. Looking back, I see that this habit of waiting may have slowed me down. When things started to go wrong, I tried every backup plan I had, but nothing worked. Within days, several of my deepest fears became real. I felt numb. The foundation of the life I knew cracked all at once. For the first time, I felt complete surrender because there was no immediate way out.
This experience pulled me into mental health because I wanted an answer. My search led me to read widely about psychology and psychiatry, and I found that therapy helps but has limits. In most areas of medicine, doctors rely on objective tests like blood work, imaging scans, or lab results, but psychiatry is different. Diagnosis depends mostly on symptoms and what patients report. The problem is that symptoms often overlap and self-reporting can be unreliable, which means diagnosis is slow, confusing, and sometimes wrong. Without objective markers, psychiatry today is in a place similar to medicine before the discovery of germs. This trial-and-error system makes treatment hard. Current classifications like the DSM rest on weak scientific foundations, which helps explain why roughly half of adults with depression do not improve on antidepressants or antipsychotics. Development of new drugs has stalled, with the last major advance being clozapine in the 1980s. Psychiatric drug trials also face extra hurdles: unlike cancer drugs, which are judged against existing treatments, brain drugs must be judged against a placebo, which roughly doubles the cost and lowers the chance of success.
This made me curious about new ways forward. I discovered computational psychiatry, which applies math, data, and computer models to study how the brain works when it is healthy and when it is not. It connects to a larger movement called Precision Psychiatry, which aims to improve diagnosis and treatment by using genetic tests, brain scans, and molecular data. The goal is to combine biological and behavioral measures with symptoms. If done well, this could give psychiatry the kind of objective data that other areas of medicine already have. But here I also saw the limits. Many AI start-ups promised to transform drug discovery but failed because the problem is not only the math, but that we still know very little about our own biology. Without enough reliable data, even the best models cannot perform well. Under pressure, some companies aimed at easy targets, which led to drugs that were no better than what already existed. This showed me that my ambition needs to be cautious and realistic.
To ground myself in reality, I decided to step into the field directly. I now work part-time as a mental health worker while still building in tech. This gives me a close look at the daily struggles patients face and the challenges clinicians deal with. My goal is to create AI tools that support psychiatry rather than replace it. I want to build computational methods that help us better understand how the brain is organized and how it changes during illness. I also see how this connects to the workplace, where mental health problems cost the global economy nearly a trillion dollars each year. Real change happens when managers are trained to listen and make workloads manageable, not just when companies make public promises. This summer, my worst fears happened, yet in the middle of it all, I found calm. That calm led me to a new direction where my ambition is no longer solely about building with technology but about healing with people. I began this year with faith, and I carry that same faith into this mission. I want to give mental health what it has lacked for too long, which is answers that are both scientific and human.
The First 15 Minutes
Today marks my first day off orientation and the end of my first two weeks as a mental health worker. Looking back, I realize how little I truly understood about psychiatry before stepping onto these floors. Now I am certain that this is the most important work I can do in my lifetime. My path here was not direct. It began last summer when the life I had carefully built came apart. In the middle of that collapse, something strange happened. I felt a profound sense of calm, a feeling so unexpected it sent me down a rabbit hole of discovery. I read everything I could find about psychology and the brain, trying to understand my own response. That search led me to a core truth of psychiatry, which is that there are still so few objective answers for what happens inside our minds. This realization became the foundation of my personal mission to help find answers that are both scientific and human.
That curiosity pushed me toward the clinic. I enrolled in a medical assistant program where a simple exercise of taking blood pressure sparked an idea. We all struggled to distinguish the sounds through our stethoscopes and were getting different readings. It made me wonder about the tools we rely on. I researched building a better digital stethoscope and discovered Eko Health, a company born from the same realization that our most basic tools can be improved. This affirmed my belief that my technical skills could have a place in medicine. From there, the path became clearer. I earned a certification as a Direct Support Professional and began working in a residential facility for adults with developmental disabilities and mental illness. It was there, while giving medications, documenting behaviors, and seeing the direct impact of care, that I knew I wanted to be a clinician. I applied to two local psychiatric hospitals and landed here, where I am today.
As a mental health worker, I am one of the people closest to the patients. We are there for the 15-minute observation rounds, for the one-to-ones, and for assisting with therapeutic activities. I have also been involved in the admissions and discharge process. During admissions, I am one of the first people to interact with a patient at their most vulnerable. I perform skin checks for contraband, take their vitals, weigh and measure them, and inventory their personal belongings while the nurse assesses their condition. This process is a stark reminder of how much critical data, both clinical and personal, is gathered from the very first moment. In my first two weeks, I have been floated across all units, from the ICU to the specialized DDMI unit. That first day on the floor was a crystallizing moment. I saw people with such a wide range of behaviors and illnesses that I knew then I wanted to do more than just observe and pass messages to the nurses. I wanted to build tools that could help solve their problems.
One memory from this week stands out. A patient was in crisis, and when staff could not de-escalate the situation, they called a code grey. I stood outside the seclusion room as they redirected the patient, and I started to cry. I did not have a relationship with this patient, and it was my first time seeing them up close. I had seen redirections before without tearing up. I still do not fully understand my reaction, but I think it speaks to the profound weight of this work. It is a weight I am now privileged to share. My experience here has already reinforced what I have learned in my research. I see the frustration in patients who wait days to see a psychiatrist for only a few minutes. I see the nurses, who patients think are just administering injections and medications, buried under administrative work and care coordination. It makes the need for better systems and for tools that can bridge these gaps feel more urgent than ever.
Over the next six months, my goal is to absorb as much as I can. I want to become a licensed psychiatric technician to take on more direct clinical responsibility, particularly in administering medication. This will deepen my understanding of psychiatric pharmacology and patient care as I continue on my path toward medical school. I will continue to observe, to document, and to ask questions. Every observation, from a patient's frustration with the system to my own unexpected tears, is data. It is the human data that will guide me as I learn to build the tools that can one day support this vital, difficult, and deeply meaningful work.
792 Hours
It has been 99 days, or 792 hours, since I began working as a mental health worker on an acute psychiatric floor. I have floated to different units and worked with the entire demographic spectrum, including adolescents, youth, adults, geriatrics, and patients with dual diagnoses of mental health and developmental disabilities. In that time, I have participated in nearly twenty admissions. Most of these occur late at night or in the early morning hours before 6:00 AM. My experience with intake so far has been eye-opening because a dedicated admissions team handles all referrals, which come from everywhere including emergency rooms, group homes, and police. The team adds the patient's name to the electronic health records, and they simply appear on our unit's list, sometimes before they have even arrived physically.
When the patient arrives, the process is a clinical drill. We take vitals, ask the police or paramedics for the transfer details, and check the patient in. Then the mental health workers split tasks. One inventories belongings and sorts cash from contraband, while the other joins a nurse for the skin check. This means asking the patient to cough and squat, then patting them down for hidden items. We give them a unit gown and the choice of our socks or their own shoes, minus the shoelaces. We document any existing marks, sometimes with a photograph, so there are no liability questions later. During this entire process, the nurse asks questions about allergies, medications, and history. Then the patient is offered food and taken to a room. What is striking is who is not there. There is no social worker. There is no psychiatrist. The questions are not therapeutic; they are administrative. A doctor and social worker are assigned only after the intake is complete, creating a fundamental misalignment: the patient's first contact with us is purely about risk management.
This clinical, non-therapeutic intake leads directly to a bigger issue because we assign rooms based on random bed availability. These rooms hold three patients at a time, and I have seen several patients unable to sleep, terrified of their roommates, or constantly begging for a room change. I cannot help but map my own recent history onto this system. When I was going through my own worries this summer, I had no thoughts of harming myself or others, but I was terrified that I had not met my personal goals. If I had walked in voluntarily, what good would it have done to put me in a room with roommates in active psychosis? Taking medications would not have solved my problems, and I would not have survived in that environment. Many of our patients are in similar situations. They are homeless, hungry, or grieving. They need a robust social safety net rather than just antipsychotics. I would put those patients in a dedicated room with a treatment plan focused on resources and support.
The inefficiency extends to our daily tracking. I recently learned that the hospital only adopted an Electronic Health Record system this year, so we operate in a fragile hybrid state. We administer medications using a computerized Medication Administration Record, which is a necessary safety guardrail, but we still document our 15-minute safety rounds on paper. I asked why we still use paper charts when we have an EHR. The nurse told me they had suggested using iPads for rounds in the past, but the idea was rejected because the hospital is a "full Windows shop" and resisted the integration. This is a classic systems failure: a rigid IT policy regarding operating systems is effectively blocking real-time clinical data collection. We are sacrificing patient safety for the convenience of the IT department.
The engineer in me sees the liability immediately. Paper rounds rely entirely on the honor system because there is no timestamp validation, meaning rounds can be backfilled or signed in bulk. We also manually calculate total sleep hours at the end of the shift, which is a basic calculation a system could perform instantly, yet we rely on tired staff to do the math, making the data prone to error. Later, these sheets are scanned into the patient’s digital record for the Utilization Review team, effectively turning critical data into a static image. We are creating a digital record that cannot be searched, audited, or analyzed for patterns. We are doing the work of data entry without gaining the intelligence of data analysis.
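That end-of-shift math is trivial to automate. A minimal sketch of what a system could do instantly, assuming each 15-minute round is logged as a single observation code; "S" (sleeping) and "A" (awake) are hypothetical codes for illustration, not our actual chart abbreviations:

```python
# Sleep-hours calculation we currently do by hand at the end of a shift.
# Each entry in `rounds` covers one 15-minute observation interval.

def total_sleep_hours(rounds: list[str]) -> float:
    """Sum the 15-minute intervals coded as sleeping, in hours."""
    return sum(1 for code in rounds if code == "S") * 15 / 60

# A night shift: 8 hours of rounds = 32 fifteen-minute entries.
night = ["A"] * 8 + ["S"] * 20 + ["A"] * 4
print(total_sleep_hours(night))  # → 5.0
```

A digital round entry would also carry a real timestamp, making the backfilling problem described above detectable rather than invisible.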
I discussed readmissions with a nurse, noting how many faces I recognized. She told me it is a revolving door of area hospitals. Patients go to every facility until they run out of insurance at one, then move to the next. Some have no insurance at all. The hospital covers this loss by maximizing revenue from the DDMI units, creating a perverse financial ecosystem where the stability of the institution depends on the instability of the patient population.
This tension is exacerbated by understaffing. We often have a census of up to 40 patients in one unit with only three nurses. This usually includes one charge nurse, one med nurse, and one other nurse along with maybe five mental health workers. This shortage directly influences our use of chemical restraints. I am conflicted about the emergency intramuscular injections. While intended for safety, they often feel like a substitute for staff. A May 2024 investigation by the New York Times confirmed this pattern, noting that in emergency settings, chemical cocktails are frequently used as a default tool to manage behavior in overwhelmed units. Crucially, the report noted that unlike physical restraints, this practice often exists in a regulatory data black hole. We are operating in a system where the most forceful intervention, sedating a human being against their will, is also the least tracked.
Even darker, recent reporting suggests an economic logic at play. A separate September 2024 New York Times investigation into Acadia Healthcare exposed a business model where patients are lured in with free assessments and then trapped to maximize insurance payouts. The investigation found that facilities would exaggerate symptoms, labeling a patient "combative" simply for asking to leave, to legally justify extending a hold. This creates a terrifying duality on the floor. Are we holding this patient because they are truly gravely disabled, or are we holding them because they still have insurance days left? When I see a generic treatment plan that effectively warehouses a patient for 14 days, I have to wonder if I am looking at a clinical timeline or a billing cycle. Treating mental illness is incredibly difficult, but there are glaring inefficiencies here. We are managing risk. We are not yet healing people.
The Heterogeneity Problem
I had this moment where I felt psychiatry was fake. I started trying to understand why insurance denies so many behavioral health claims and why hospital policies feel so rigid. I am not anti-science. I believe in science. However, after 62 days of working as a mental health worker, the current diagnosis and treatment of mental illness simply did not seem credible to me. I watched patients get diagnosed and saw that medications often do not work, forcing a constant cycle of switching them or increasing the dosage. We currently operate under the Biopsychosocial Model, which sounds comprehensive but often devolves into what Dr. Nassir Ghaemi calls "lazy eclecticism," a list of ingredients without a recipe. We throw medications, therapy, and social work at every patient in equal measure, hoping something sticks. But without stratification, without knowing whether the primary driver is biological, chemical, or social, we are not treating the patient. We are just checking boxes. I was becoming heartbroken. I had come to this field full of optimism that I could use my computational skills to help, but now I was questioning the entire field.
Then, a few weeks ago, I interviewed for an unlicensed psychiatric technician position at a non-profit psychiatric hospital, and it changed my perspective. This facility is unique because they only admit patients without insurance. Based on the discussion, it seems they are already practicing stratification. Their patients are primarily those considered gravely disabled and those with substance abuse issues. They try not to admit people with developmental disabilities or those who are a danger to self and others. This is not because they would turn them away, but because their focus is specific. Their psych beds are bought by the county and the state. I was impressed when they described their restraints. They do not use prone positions and mostly use chairs, taking the chair to wherever the patient is having an episode. It felt more humane. The interview was refreshing validation: it showed that the hypothesis forming in my notebook could work as a practical model.
My hypothesis is that we need rigorous stratification. We need social stratification to separate those suffering from homelessness or trauma from those with organic brain disease. We need chemical stratification to distinguish substance-induced psychosis from endogenous disorders. Finally, we need biological stratification to look at the neuroanatomy itself. Mixing these populations on one floor creates chaos rather than healing. Critics might call this approach reductionist, and they would be right. But as Dr. Awais Aftab argues, we need to embrace Pragmatic Reductionism. We cannot treat "everything" at once. To solve a complex problem like psychosis, we must reduce it to an actionable mechanism. If the driver is homelessness, we reduce the problem to housing. If the driver is dopamine, we reduce it to medication. Stratification reduces the ambiguity of the patient so we can increase the precision of the care. We don't reduce the human; we reduce the noise.
My path to this idea was unconventional. I have always been interested in a wide range of topics. In high school, I chose the arts and humanities track against my placement in the science class because I wanted to study law and political economy. I loved debating and was deeply analytical. I looked into the Philosophy, Politics and Economics course at Oxford, but I never got to study it. I came to the U.S. to study computer science instead. This background is why I think I see the floor differently. When I started working the night shift, I found I was always asking questions and trying to find answers as I observed the patients. I started writing on puzzle scratch papers. I realized I was dabbling in both philosophy and psychology, so I bought a small book to write my thoughts in. I was doing more than observing; I was analyzing the system.
This "lazy eclecticism" is often enforced by logistics. In my facility, there are eight psychiatrists for nearly 180 patients. I observed a doctor today spend exactly three minutes doing rounds on six patients. That is thirty seconds per person. In thirty seconds, you cannot perform stratification. You cannot analyze the social, chemical, and biological layers. You can only check a box and refill a prescription. We are not treating the patient's complexity. We are managing the doctor's schedule.
I know I am not a doctor or a nurse practitioner, but I can also argue that I spend more time with the patients than either of them, thanks to the 15-minute rounds and one-to-ones. My job is literally to observe and record behavior. From that vantage point, I see a disconnect. This is why the current system feels so broken. I know the Diagnostic and Statistical Manual of Mental Disorders provides a necessary common language for billing and categorization. Yet when I compare the meaning of those labels to the actual behaviors of the patients on the floor, they rarely check out. The patient's attitude, their culture, and their entire context seem to give different information. The DSM often feels like opinions rather than hard research. So I looked into the history of psychiatry and the DSM, and what I found backed up my hypothesis. The history of psychiatry is barbaric. The development of the DSM itself is problematic and often lacks scientific rigor. It helps explain why there have not been many breakthroughs in psychiatry for a long time. It is a system built on a flawed foundation.
I also see that mental illness is highly cultural. I see people admitted for behaviors that, back home, would get them tagged as "stubborn" and never placed in psychiatric care. The baseline for mental illness seems to vary by culture. More than that, I hold a very strong belief that mental illness is largely rooted in a lack of strong social support and the state of our socioeconomic system. This is why "medical" psychiatry feels so limited. The only things close to medical are the medications, the taking of vital signs, and the urine and blood draws. Even those seem to exist mostly to rule out pregnancy or other physical illnesses before and during admission.

My observations are concrete. A patient was having an episode, and staff were preparing to call a code grey. I decided to try to de-escalate. I asked them why they thought they were having an episode. They said this always happens before their menstrual cycle. I told the charge nurse, but they said it was not correct. I could not push back because it is outside my scope of practice. I know from being terminally online that it is common to mock women who say their period is making them irritable. Yet premenstrual dysphoric disorder is an actual category in the ICD-11 and even the DSM-5. I know this because I have read them from cover to cover. In this case, I could not help the patient even though I understood what they were saying.
I believe neuroscience is our most objective way to address these diagnostic issues, alongside the biopsychosocial element. We cannot reduce everything in mental illness to the brain alone. This is where my unique path comes together. Being a psychiatric technician will give me more stories and let me observe patients, but it does not pay enough to be a career. I could become a psychiatric nurse practitioner, but the educational training for doctors and nurses is completely different. I think I am, in a sense, doing my residency now as a technician, just without the legal authority to diagnose, treat, and prescribe medications. But I lack a lot of medical knowledge, especially neuroanatomy, and I need medical school for that education. I do not want to be a clinician alone. I want to contribute to scientific research and build psychiatric technology tools. That non-profit hospital showed me that better systems are possible, but we need data to prove it. While I wait to pursue my medical education, I am using the tools I already have. My first step is computational: I am building a Python script to scrape a list of approved psychiatric hospitals in California and gather patients' reviews. I cannot possibly interview at or work in every hospital, so this is a good starting point for data. I am done questioning the field. I am actively researching how to fix it.
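The parsing half of that script looks something like the sketch below. The HTML here is a stand-in for a real directory page, and the tag structure and class name ("facility") are placeholders for whatever the actual listing uses; a live version would first fetch the page with an HTTP client and adapt the selectors.

```python
# Minimal sketch: extract facility names from a hospital directory page.
# Uses only the standard library so the parsing step is easy to test.
from html.parser import HTMLParser

class HospitalListParser(HTMLParser):
    """Collects the text of every <li class="facility"> entry."""
    def __init__(self):
        super().__init__()
        self.in_facility = False
        self.hospitals = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs.
        if tag == "li" and ("class", "facility") in attrs:
            self.in_facility = True

    def handle_data(self, data):
        if self.in_facility and data.strip():
            self.hospitals.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_facility = False

# Placeholder HTML standing in for the real directory page.
sample = """
<ul>
  <li class="facility">Example Psychiatric Hospital, Sacramento</li>
  <li class="facility">Sample Behavioral Health Center, Fresno</li>
</ul>
"""
parser = HospitalListParser()
parser.feed(sample)
print(parser.hospitals)
```

From there, each facility name becomes a search key for pulling public reviews, which is where the actual signal about patient experience would come from.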
The User Experience of the Locked Unit
It has been three weeks since my interview with the non-profit hospital. The process from the recruiter screen to the actual first in-person meeting took a while, and I have not heard back since. In this field, silence is common. It took a month to get my current position, so I am treating the wait as standard operating procedure rather than a rejection. While I wait, I am still on the floor at the acute hospital. Something in my perspective is shifting. I used to think my taste was limited to software products. I cared about how a button feels or how a user flows through an app. Recently, I have started thinking obsessively about physical architecture.
The hospital where I work was built over 30 years ago. It is a large facility with nine distinct units. I understand that it cannot easily be modernized, but the age of the building reveals the philosophy of its time. The design prioritizes containment over care. It is a hardened facility where everything is sealed. The primary design constraint is preventing liability by stopping patients from eloping and removing ligature risks where a patient could harm themselves. These are necessary safety features, but safety is not the same as therapy. I suspect the architecture itself acts like a hospital-acquired infection. Just as a patient can catch a germ in a medical unit, a psychiatric patient can contract new stress simply from the building's design.
The most glaring flaw is acoustic. In an acute unit, crises happen around the clock. Currently, when a patient has an episode, the entire unit participates in it. It does not matter if it is noon or midnight. I watch patients trying to read, rest, or just exist while someone down the hall is throwing their body weight against a door, screaming, or banging the walls. If I were a patient here, finding peace would be impossible. If I cannot find quiet, I cannot regulate my emotions. It creates a feedback loop where the environment causes chronic stress, which causes more outbursts, which causes more noise. This connects to a deeper design failure, which is the lack of a middle ground. A patient currently has two choices: the overstimulating, public dayroom or a cramped bedroom shared with strangers. There is no semi-private alcove where a patient can decompress without being in full isolation.
This architectural gap forces our hand. Because there is no soft timeout room, we often have to use the hard seclusion room for de-escalation. Even if we leave it unlocked and place a staff member outside on ligature watch, the optics are wrong. We are trying to help a patient find calm, but the only space available is a sterile, concrete box designed for containment. Worse, when space is this limited, chemical restraints become a default solution. I have observed that we often rely on emergency medication simply because we lack the physical space to let a patient walk off their agitation safely. If the architecture offered more movement and privacy, I believe we could reduce the number of medications and injections we administer.
Then there is the temperature. Patients routinely complain that it is freezing. Tonight alone, four patients were unable to sleep because of the cold, despite piling on multiple blankets. The HVAC system is controlled centrally by an engineering team that is not present during the night shift. This creates a bureaucratic deadlock where staff cannot adjust the thermostat to meet human needs. We are asking a destabilized brain to rest while the body is in thermal distress. This is not just discomfort. It is a physiological barrier to recovery.
Then there is the light. We are a locked facility, which means fresh air is rare. Patients only see the sun during scheduled activities. The color palette of the unit is drab. I give the facility credit for the individual reading lights above the beds, which is a small touch of autonomy. However, the overall lack of natural light disrupts circadian rhythms. We are asking people to heal their minds in a space that deprives them of the biological basics. They need quiet, regulation, and sunlight. I want to push the conversation beyond risk mitigation. We have solved the problem of how to keep patients inside the building. The next design challenge is how to make the inside of the building a place where recovery is actually possible.
The Entropy Trap
We often describe the psychiatric ward as a place of containment or safety, referring to it as a "therapeutic milieu." But if you analyze the acute unit as a system, interrogating its inputs and outputs, it functions as something else entirely. It acts as a stochastic noise generator that maximizes uncertainty.
For the past few months, I have observed the floor not just as a worker but as a systems engineer, watching patients spiral not because of their pathology but because of their environment. What I see is a fundamental collision between the neurobiology of the brain and the architecture of the facility.
To understand why the current model fails, we have to look at what the brain actually is. Modern computational neuroscience, led by thinkers like Karl Friston, suggests the brain is not a "thinking machine" so much as a predictive processing engine. Its primary biological imperative is to model the world and minimize "surprise," or in systems terms, to minimize entropy. The brain wants the internal model to match the external reality, meaning sanity is effectively just predictability.
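For the mathematically inclined, "minimize surprise" has a standard information-theoretic form. This is Shannon's formalism as used in the free-energy literature, not something original to my notes:

```latex
% Surprise (self-information) of an observation o under the brain's model m:
S(o) = -\ln p(o \mid m)

% Entropy is expected surprise; "sanity is predictability" then reads as
% keeping this long-run average low:
H = -\sum_{o} p(o \mid m) \, \ln p(o \mid m)
```

An environment the model predicts well keeps both quantities small; a chaotic one drives them up no matter how hard the brain works.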
Psychosis, by this definition, is a prediction error. It is a failure of the model where the brain cannot reconcile the sensory input, so it floods the system with dopamine to flag the discrepancy. A psychotic brain is a brain experiencing maximum internal entropy, acting like a gyroscope that has lost its center and is desperately looking for a pattern to latch onto.
The tragedy is what we do with this brain. We take this organ, which is already drowning in internal chaos, and we place it in the most unpredictable environment in modern society.
Consider the data inputs of the Locked Unit. The acoustics are hard and reflective, amplifying random screaming that occurs without warning. Roommates are swapped based on bed availability rather than compatibility. The lighting is artificial and static, detaching the patient from the circadian rhythm of the sun, and staff members interrupt sleep every 15 minutes for safety checks. This fragments the only biological homeostatic reset mechanism the brain has.
We are trying to stabilize a prediction error by feeding it more noise. This creates a recursive system error I call the Entropy Trap. The patient enters with high internal entropy, like Mania or Psychosis, and the unit responds with high external entropy, like noise and threat. The patient’s brain tries to predict the environment but fails, so it increases the gain on the error signal. It releases more dopamine to find the signal in the noise, which only increases the psychosis and hyper-vigilance. Suddenly, every slam of a door feels like a threat, the patient screams or hits, and we respond with chemical restraints.
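The Entropy Trap can be sketched as a minimal feedback loop. The gain-update rule and all constants below are illustrative assumptions of mine, not physiology: chronic prediction error turns up the gain on the error signal (the "dopamine" knob), and a high-gain model chasing a noisy environment never settles.

```python
import random

def entropy_trap(env_noise, steps=300, seed=1):
    """Toy feedback loop: persistent prediction error raises the gain
    on the error signal, so the model chases the noise harder, which
    keeps the error high. Constants are illustrative, not physiological."""
    rng = random.Random(seed)
    model, gain = 0.0, 0.2
    errors = []
    for _ in range(steps):
        sensed = rng.gauss(0, env_noise)        # the ward delivers pure noise
        error = abs(sensed - model)
        errors.append(error)
        gain = min(1.8, gain + 0.01 * error)    # chronic error turns the gain up
        model += gain * (sensed - model)        # hyper-vigilant over-correction
    return sum(errors[-50:]) / 50               # late-stage average error

quiet_ward = entropy_trap(env_noise=0.2)
loud_ward = entropy_trap(env_noise=2.0)
assert loud_ward > quiet_ward                   # the noisier ward never stabilizes
```

Note the asymmetry the loop produces: the quiet ward lets the gain relax while error stays small, but in the loud ward the gain saturates and the model over-corrects on every slammed door.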
We label this "non-compliance" or "agitation," but mechanically, it is a feedback loop. The architecture of the facility is actively fueling the pathophysiology of the disease. We are trying to put out a fire by suffocating it while the building itself pumps in oxygen.
I have likened this reaction to a dog attack. When a dog attacks, you do not think; you act, because your amygdala overrides your inhibition. In the acute unit, the dopamine flood creates a false dog. The chaos of the floor convinces the brain it is under attack, so the patient isn't choosing violence. They are reacting to a biological imperative driven by the sensory data we are feeding them.
I experienced a minor version of this system failure myself just days ago. I had gone two nights without proper sleep because I was working obsessively on a new project. My brain was running high on salience due to excitement and low on energy. While sleeping on a friend's couch, which is a new environment that keeps the brain partially alert, I began to dream. In the dream, I was explaining my project. But because my system was overheated, the biological firewall that usually paralyzes our muscles during sleep failed. The signal leaked through. I turned around physically and began speaking to my friend in reality. I was convinced I was still in the simulation. For a few seconds, I was technically psychotic. My internal model, the dream, overrode the external reality, the room. I was acting on data that wasn't there. If a healthy brain can glitch like this after just 48 hours of sleep deprivation and high focus, imagine the state of a patient who has been homeless, sleepless, and terrified for a month. We are all running on the same hardware. Their operating conditions are just infinitely worse.
This mechanism isn't theoretical; it is documented. In May 2024, the New York Times profiled Matthew Tuleja, a former Division I football player with OCD. He wasn't psychotic in the traditional sense; he was overwhelmed. When he was cornered in a small exam room by nine staff members, his brain didn't analyze the legal implications. It recognized a physical threat. His athletic training kicked in, the "hot system" overriding the "cool system," and he attempted to run through the line like a fullback finding a hole. The system read this biological survival response as "combativeness." They pinned him, cuffed him, and injected him with antipsychotics. He wasn't fighting because he was "bad"; he was fighting because the environment had spiked the entropy to a level where "Fight or Flight" was the only available logic. We took a brain in distress, surrounded it with threat, and then punished it for trying to survive.
I witnessed this mechanism play out in real-time just tonight. A patient who had been readmitted was assigned to a room with a roommate who was trying to sleep. The readmitted patient, known for chronic insomnia, was awake and talking, keeping the other person up. When I simply asked the roommate if they were okay or needed ear plugs, the readmitted patient immediately accused me of blaming them, interpreting my neutral check-in as a hostile implication that they were a burden. Their brain, flooded with the stress of readmission and the social friction of the shared room, had predicted a threat where there was none. They were in the middle of a dog attack. But thirty minutes later, something remarkable happened. The patient came back and apologized. This apology is the crucial data point. It proves that the aggression was not a character flaw but a transient hardware crash. Once the adrenaline flushed out of their system and the prefrontal cortex came back online, their theory of mind returned. They could see me again not as a threat, but as a person. The tragedy is that the facility created the conflict by placing an insomniac with a sleeper, increasing the social entropy of the room, and then we blamed the patient for reacting to the friction we engineered.
This brings us to the concept of Trauma-Informed Care. Clinicians often speak of this as the necessity of creating safety and trust for patients whose nervous systems have been rewired by abuse or neglect. But we often treat Trauma-Informed Care as a soft skill. We treat it as something staff do with their voices or manners. We ignore the fact that the building itself is often traumagenic.
A traumatized brain is biologically hypersensitive to unpredictability. It interprets sudden noise or social chaos as a threat. By placing a trauma survivor in a High-Entropy Unit, we are surrounding them with the very triggers their brain is desperate to avoid. Therefore, Entropy Reduction is the architectural prerequisite for Trauma-Informed Care. We cannot ask staff to create a sense of safety in a building designed to generate chaos.
The failure of modern psychiatry is that we attempt to treat the Biological Layer while ignoring the Environmental Layer. We patch the software bug while running the program on corrupted hardware. We need to move toward what Dr. Ghaemi calls "Method-Based Psychiatry." If the problem is biological, like Dopamine, we use biological methods. But if the problem is environmental, like Entropy, we must use environmental methods. Trying to cure an entropy problem with a biological pill is not just ineffective; it is a category error.
A true therapeutic facility needs to be hyper-predictable. This does not mean total silence, which creates a safety risk because staff need to hear if a patient is in distress. We need to hear the signal while eliminating the noise. Currently, our hard walls and floors reflect every sound, amplifying a dropped book into a gunshot. A therapeutic design would use acoustic dampening materials that absorb echoes and soften impacts without blocking the cries for help. It would prioritize private spaces to remove the social threat variable of a volatile roommate and use circadian lighting to restore the temporal variable.
We cannot talk about stabilizing a patient until we stabilize the signal they are receiving. Until then, we are not healing them. We are just muting the noise we created.
The Ontological Error
The sound of a magnetic lock engaging on a psychiatric unit is final. It is the sharp, mechanical punctuation mark that separates a citizen with rights from a patient under the state’s control. In my 792 hours on the acute floor, I have heard that sound hundreds of times. Standing on the inside, observing the intake process, you realize that this click does not just represent a change in location; it represents a metaphysical shift where the person behind the door has been deemed to have lost their agency. California Senate Bill 331 proposes to change the rules for that lock by defining a "mental health disorder" in the Lanterman-Petris-Short Act as any condition listed in the current Diagnostic and Statistical Manual of Mental Disorders. To a politician, this looks like a housekeeping measure, but to a mental health worker, it is a category error that mistakes the map for the territory, and in doing so, it threatens to turn the definition of human liberty over to a private committee that never intended to write a constitution.
The DSM is a masterpiece of descriptive utility that allows a doctor in Tokyo to communicate with a doctor in Toronto about "Schizophrenia," but strictly speaking, the DSM is nominalist rather than realist. It does not describe natural kinds that exist in nature like gold or tuberculosis, but rather practical kinds, or clusters of symptoms that we have grouped together because they tend to appear together. The DSM is theory-neutral; it does not ask why you are depressed or search for a biological mechanism. It simply asks whether you have five of these nine symptoms. SB 331 creates a terrifying equivalency by suggesting that if a person fits a descriptive cluster in a book, they have met the threshold for state intervention. Yet the DSM lists 265 conditions, including "Caffeine Withdrawal," "Restless Leg Syndrome," and "Adjustment Disorder." This expansion is a symptom of what critics call "Nosologomania," an obsession with classifying every variety of human distress as a distinct medical disorder. By tethering the legal definition of disorder to this ever-expanding catalog, the State is arguing that descriptions of distress are legally equivalent to states of incapacity. This is wrong because a checklist of behaviors is not the same thing as the loss of the self.
This is where my time on the floor collides with the philosophy of Dr. Thomas Fuchs. Dr. Fuchs argues that true mental illness is not just a software glitch or a wrong thought, but a fundamental disruption of "Being-in-the-world," a breakdown of the lived structure of time, space, and the body. I have seen this breakdown when a patient stands in the middle of the hallway, weeping not because they are sad but because they have lost the ability to synthesize time. For them, the future has collapsed into the present. They are not acting irrationally; they are inhabiting a different phenomenological reality where their agency is structurally impossible. That is the threshold for a loss of liberty. However, I also see patients who are poor, addicted, or eccentric. They might meet the DSM criteria for Substance Use Disorder or Antisocial Personality Disorder, but they have not lost their "Being-in-the-world." They are making rational, autonomous choices within a harsh environment. SB 331 ignores this distinction. By focusing on the Checklist rather than the Phenomenology, it creates a dragnet that allows the state to detain the struggling subject with the same force it uses for the dissolved subject.
We rely on the DSM because psychiatry is currently a science of observation, not calculation. We are stuck in the era of tinkering rather than limit thinking. In other fields of medicine, we have objective limits. We do not detain someone for looking diabetic; we detain them because their blood glucose is 40 mg/dL and they are in a coma. We have a biomarker that proves the biological mechanism has failed. In psychiatry, we lack these biomarkers. We do not yet have the computational tools to measure the prediction error in a schizophrenic brain or the reward function in an addicted brain, so we cannot mathematically prove that a person’s internal control system is broken. Because we lack these objective answers, we must rely on subjective observation, and because subjective observation is flawed, our legal standard for removing freedom must remain incredibly high.
This philosophical vagueness has concrete consequences. When the legal definition of disorder is loose, it becomes a playground for profit. If we cannot objectively prove that a patient is sick via a biomarker, then the diagnosis relies entirely on the narrative provided by the facility. As we have seen recently, that narrative can be engineered for revenue. In September 2024, the New York Times published an investigation into one of the nation's largest psychiatric hospital chains, detailing a systematic business model where patients were lured in with free assessments and then trapped to maximize insurance payouts. The investigation found that staff were pressured to exaggerate symptoms to justify holding patients against their will, documenting quiet patients as "withdrawn" and polite patients as "combative" if they asked to leave. The goal was not clinical stabilization but to extend the stay until the insurance authorization ran out. This is the danger of SB 331. By defining mental disorder as anything in the DSM, we are handing these corporations a massive expansion of their catchment area. If the law allows a facility to detain a citizen for Adjustment Disorder or Caffeine Withdrawal, we are effectively removing the legal guardrails that prevent false imprisonment, allowing the hospital to define the patient as inventory rather than as a human being. When the legal standard is subjective, the tie always goes to the house, and the house always wins.
Proponents of SB 331, including the California State Association of Psychiatrists, argue that we should trust clinicians and that excluding diagnoses from the law freezes medical practice. They posit that a doctor would never detain someone for Caffeine Withdrawal unless they were truly gravely disabled, and they argue that the DSM represents the current gold standard of medical knowledge. But this view is over a decade out of date. In 2013, Dr. Thomas Insel, then-Director of the National Institute of Mental Health, famously declared that the NIMH would pivot away from using the DSM for research. His reasoning was a system engineer’s critique: the DSM has reliability, meaning doctors agree on the label, but lacks validity, meaning the label does not map to a biological reality. Dr. Insel wrote simply that "biology never read that book." Science writer John Horgan calls this phenomenon "Neurocentrism," the mistaken belief that we can explain all human behavior by looking at neurons. In his critique of the field, Horgan notes that despite decades of hype, neuroscience has failed to produce a single unifying theory of the mind or a single biomarker for mental illness. Yet SB 331 asks us to codify this uncertainty into law, treating the soft descriptions of the DSM with the same legal weight as the hard laws of physics. As Horgan warns, we are confusing the biological hardware with the social software, assuming that every problem of living is a problem of the brain. If the primary funding body for psychiatric research in the United States does not trust the DSM to define biological reality, why is the State of California proposing to use it to define civil liberty? We are codifying a map that the cartographers have already admitted is flawed.
SB 331 tries to solve the difficulty of diagnosis by lowering the bar. Instead of waiting for science to provide objective limits, the law proposes we simply accept all subjective descriptions as valid grounds for detention. This creates a political danger, as the definition of who gets to be free is the most sacred duty of a democracy and should be defined by the Constitution and the People. The DSM is published by the American Psychiatric Association and revised by committees subject to academic trends, pharmaceutical lobbying, and cultural shifts. If California ties its civil liberty laws to the current edition of the DSM, it is outsourcing its sovereignty, effectively saying that if a private committee in Virginia decides next year that Prolonged Driving is a disorder, then a citizen in California can lose their freedom for driving too long. We are confusing a billing manual for a Bill of Rights.
This isn't just a theoretical debate because the consequences are measured in lives. A July 2024 study by the Federal Reserve Bank of New York analyzed the causal effects of involuntary hospitalization. The findings were devastating. For "marginal" patients, the exact population SB 331 aims to capture, involuntary commitment nearly doubled the probability of dying by suicide or overdose in the months following release. The study found that the disruption to the patient’s social stability, such as housing and employment, outweighed any clinical benefit. By widening the net to catch these marginal cases, we are not saving them. We are statistically increasing their mortality. We are destroying their social stability to treat a biological condition we haven't even validated.
On the floor, amidst the noise and the paper charts, the difference between struggle and sickness is palpable. The staff knows it, and the patients know it. We do not need a wider net; we need a more precise instrument. Until we have the computational tools to objectively prove that a mind has lost its capacity for agency, and until we can stratify the biological break from the social struggle, we must not allow a book of symptoms to become a weapon of the state. We must protect the distinction between the map and the territory, because once the lock clicks shut, the territory is the only thing that matters.
The Psychiatric Transporter
I almost called out of work today. I was exhausted, worn down by the friction of juggling two jobs and two classes for my pre-med post-bacc. But I went in, and I am glad I did, because tonight rounded out my full experience as a mental health worker. I performed the final duty of the role, which is transport. Acute psychiatric hospitals do not have emergency medical services. We are a closed loop designed for the mind, not the body. When a patient’s physiology fails, we have to ship them out to a partner hospital. My shift started at 11:00 PM, and by 11:15 PM, I was in the back of an ambulance.
The process began with the internist. Unlike the psychiatrists who diagnose based on speech and behavior, the internist looks at the blood. A patient had a swollen left leg. The blood work and vital signs raised a red flag, so the decision was made to transfer them to the ER. The ambulance crew arrived, took the vitals again, loaded the patient, and I sat in the back for the ride. It was my first time in an ambulance. The transition was sharp. We moved from the static, timeless air of the locked unit to the kinetic, high-stakes environment of emergency medicine.
Upon arrival at the ER, the difference in intake was immediately apparent. At our hospital, intake is an administrative drill of liability forms and property searches where the psychiatrist is often just a name assigned later. Here, the expert was waiting for us. The triage nurse demanded allergies, medications, and legal hold status, and screened the patient for self-harm immediately. The doctor met the patient at the door before they were even transferred to the bed. He checked the heart rate and the legs immediately. There was no waiting for an assignment. The decision-maker was the first line of defense, scanning for immediate physiological threats before the patient even settled.
Then came the technology. In our psych unit, the only monitoring tools are human eyes and paper charts. In the ER, the room became a hub of sensors. First came the EKG technicians, hooking up cords to read the electrical rhythm of the heart. Then came the X-ray technician to image the chest. Then the phlebotomist for blood. Then the ultrasound technician for the leg. Then the CT scan. The patient was dehydrated and could not provide a urine sample, so they hung IV fluids immediately. It was a symphony of objective measurement. We were not guessing what was wrong; we were looking inside the machine.
As someone coming from tech, I recognized this setup immediately. In infrastructure engineering, we use dashboards like Grafana or Prometheus to visualize the health of a server in real-time. The ER is the biological equivalent. It is a high-fidelity observability stack. We were not guessing what was wrong; we were reading the logs. The diagnosis revealed the gap between Signs and Pathology. Everyone, including the nurses at the psych hospital, suspected a Deep Vein Thrombosis because the patient’s left calf was swollen, reddish, and warm. That was the sign. But the ultrasound revealed the truth. The actual blood clot was not in the swollen calf where we were all looking. It was high in the thigh, in a spot that looked perfectly normal from the outside. Furthermore, the chest X-ray and CT scan captured a Pulmonary Embolism in the lung. The patient had a life-threatening clot in their chest that was completely invisible to the naked eye. No amount of observation or 15-minute rounds would have found it. We needed the sensors.
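The gap between Signs and Pathology can be put in observability terms. The sketch below is mine, with invented field names and an illustrative (not clinical) lab threshold: a surface check sees only the outward sign, while the instrumented check reads an objective value, and only the sensor-driven workup localizes where the pathology actually is.

```python
def surface_check(patient):
    """What eyes and 15-minute rounds can see: the outward sign only."""
    return "swollen left calf" in patient["visible_signs"]

def instrumented_check(patient, threshold=0.5):
    """What the sensor stack can see: an objective lab value crossing a
    threshold. The 0.5 cutoff here is illustrative, not clinical guidance."""
    return patient["labs"]["d_dimer"] > threshold

patient = {
    "visible_signs": ["swollen left calf"],          # the sign points at the calf...
    "labs": {"d_dimer": 3.2},                        # ...the lab flags clotting itself
    "ultrasound_finding": "clot in proximal thigh",  # invisible from the outside
}

# Both checks fire, but only the instrumented pipeline (labs, ultrasound,
# CT) tells you where the pathology actually is.
assert surface_check(patient)
assert instrumented_check(patient)
assert patient["ultrasound_finding"] != "clot in calf"
```

This is the Grafana analogy in miniature: the dashboard does not replace looking at the machine, but it catches the failures the naked eye is structurally unable to see.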
However, I also observed that the Entropy Trap exists in medicine too, just in a different form. The ER doctor, armed with millions of dollars of diagnostic technology, was guilty of the same detachment I see on the psych floor. He spent forty seconds with the patient at the door. Later, he spent another thirty seconds to confirm the diagnosis and say nurses would bring meds. This was not because he didn't care. It was because the system’s metrics determine funding and efficiency. The sensors provided the data, so the conversation was deemed redundant. This confirms that efficiency can become its own form of entropy. The more efficient the system, the less human the interaction. We have the data to save the life, but we lose the human in the process. This validates the mission of my current side projects, specifically Psykicks. By handling the administrative load, we can allow the human connection to return.
This experience clarified my future trajectory. Standard psychiatry often ignores the body, treating the mind in a vacuum. Standard medicine often ignores the mind, treating the organ in a vacuum. I intend to operate at the interface. Consultation-Liaison Psychiatry is the clinical discipline that bridges this gap, and it will serve as the primary domain for my computational research. C-L Psychiatry operates in the data-richest environment of the hospital. By integrating labs, imaging, and psych history, it provides the high-fidelity signals my algorithms require. I view C-L not just as a medical specialty, but as the necessary hardware access to build the next generation of psychiatric software. In C-L psychiatry, you are forced to stratify constantly. You must decide if agitation is due to hypoxia, withdrawal, or fear. It is the role of the System Administrator for the hospital's behavioral health. We need the objective precision of the ultrasound combined with the phenomenological depth of the therapist. We need to see the clot in the brain, but we also need to hear the person in the bed.