This is a snapshot of my personal core beliefs around moral philosophy and identity, written primarily during LessOnline 2025.
Definitions
At any moment, your brain occupies a particular state which encodes your ideas, personality, values, and belief system (your “neural network”). We can call this configuration your "headspace."
Over time and across environments, some aspects of this state remain relatively fixed (fundamental beliefs, identity, nature), while others are more fluid (your day-to-day thoughts and decisions).
This headspace is generally malleable. Neuroplasticity declines with age, but the brain remains quite plastic through adulthood. Medication, psychedelics, and traumatic experiences all affect neuroplasticity in various ways.
If you’re determinism-leaning, you may argue that any exact headspace in an exact environment (your body, external stimuli, etc.) produces the same thoughts and actions, as predetermined by the system of the universe.
The closer you can approximate a model of the physical world and people, the more accurately you can understand and predict the behavior of your environment, and then use that as inputs into your headspace.
The closer you can approximate a model of someone else's headspace, the more accurately you can understand and predict their behavior, as you are effectively simulating their internal weights of their neural network. You might call this “cognitive empathy,” where “emotional empathy” is allowing their emotional state to propagate through your headspace.
Naturally, all approximations are lossy; “the map is not the territory.” There are too many complex effects, genetic quirks, and subconscious dynamics for humans to have the knowledge and computational power to model everything fully. We often suffer from projection bias, where our model of someone else’s headspace is unconsciously biased by our own.
But often, our simplified models can be quite predictive of reality and other humans.
(More on headspaces: https://wustep.me/headspace)
A lens is a particular pattern of beliefs, values, or frameworks through which you perceive and evaluate the world. Lenses range from broad ideology (e.g. “Christianity” or "utilitarianism" or "physicalism") to narrow ideas ("charisma" or "agency"). Lenses can be thought of as specific patterns or blobs in a headspace that can be adopted and applied.
Philosophy is the practice of examining, constructing, and refining headspaces & lenses that help us understand the world and choose how to live in it.
My philosophy
These are the lenses that I most identify with.
Primary moral and philosophic lenses:
- Moral subjectivism: All moral judgments of “right” and “wrong” arise only when applying specific lenses and headspaces.
- “Mary steals food from a market to feed her starving children.” ← may be wrong from a legalist lens, but net good from a utilitarian lens. From a systems lens, this situation probably points to a larger systemic issue and misaligned incentives.
- Some lenses are more "objectively" correct, because (1) they have extremely high predictive power in interacting with the "real" world (e.g. physics, mathematics), or (2) they best approximate the structures shared across most or all human headspaces (e.g. “the sky is blue”). But generally these ideas do not apply to moral judgments.
“All models are wrong, some are useful.”
- Pragmatism: Evaluate beliefs, actions, and systems primarily by their consequences and effectiveness.
- Did it work? Was it useful? What goals did it accomplish? Did it result in better outcomes over time, for the individual or system?
- Language and thought are best used primarily for prediction, problem-solving, and action, rather than for representing reality (as we can only ever operate on subjective realities).
- Act & Preference Utilitarianism: Evaluate actions by the utility they generate—specifically, how much they appear to satisfy individual preferences and contribute to human flourishing. This is weighted by the intensity of those preferences, the timescale of their effects, and probabilistic risk.
- Modulated by longtermism: Favor actions by how they contribute to the sustained flourishing of individuals, communities, and civilization as a whole.
- Modulated by risk: Actions should be evaluated not just by outcomes, but by their probability-weighted potential value. A calculated risk that fails may still have been the right decision if it had sufficient upside and learning value.
- Sometimes, rule utilitarianism is preferred instead as a shortcut, because it reduces decision fatigue and moral calculation costs.
On utility
Many view utility as an axis of pleasure and pain (or suffering). Over time, I’ve preferred to think of +utility as a more nuanced, multi-dimensional vector of sustained flourishing, maximizing goals like:
- Well-being: physical & mental health, safety
- Happiness: moment-to-moment affect and overall life satisfaction
- Fulfillment: a sense of purpose, growth, coherence with one’s values
- Contribution: positive externalities created for others and for future selves
An alternative lens is that every individual has their own goals and preferences that they’re optimizing for — which may include specific conceptualizations of well-being, happiness, fulfillment, and contribution — and we should optimize for our projections of individuals’ goals, with some weight towards the ones we most agree with and want to propagate into the world.
I find myself agreeing with Viktor Frankl’s notion that “suffering in and of itself is meaningless; we give our suffering meaning by the way in which we respond to it.” Often, suffering and pain lead to growth, while maximizing pleasure leads to greater expectations of pleasure rather than what seems like optimal well-being.
We may also apply multipliers to our own utility and to the utility of those closer to us (family, friends, community, organization, country, etc.).
Naturally, exact calculations of utility are impossible. All we can really do is “vibes” calculations, based on coarse predictions from our internal models and approximations via empathy. In practice, that’s often “good enough” to choose the higher-utility outcomes.
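To make the shape of that “vibes” calculation concrete, here’s a minimal sketch in Python. Everything in it is a hypothetical illustration (the dimensions, closeness multipliers, and numbers are invented), not a procedure anyone actually runs:

```python
from dataclasses import dataclass

# Hypothetical flourishing dimensions from above; all numbers are made up.
DIMENSIONS = ["well_being", "happiness", "fulfillment", "contribution"]

@dataclass
class Outcome:
    probability: float         # chance this outcome occurs
    utility: dict[str, float]  # coarse "vibes" score per dimension
    closeness: float           # multiplier for how close the affected people are

def expected_utility(outcomes: list[Outcome]) -> float:
    """Probability-weighted sum of a multi-dimensional utility vector,
    scaled by closeness multipliers (self/family > community > strangers)."""
    total = 0.0
    for o in outcomes:
        score = sum(o.utility.get(d, 0.0) for d in DIMENSIONS)
        total += o.probability * o.closeness * score
    return total

# A calculated risk: likely small loss, small chance of a large, lasting win.
risky_project = [
    Outcome(0.8, {"happiness": -1, "fulfillment": +2}, closeness=1.0),
    Outcome(0.2, {"well_being": +3, "contribution": +8}, closeness=1.0),
]
safe_default = [Outcome(1.0, {"happiness": +1}, closeness=1.0)]

print(expected_utility(risky_project))  # 3.0: higher EV despite likely "failure"
print(expected_utility(safe_default))   # 1.0
```

The point is only that intuition approximates this structure: probability-weight the outcomes, scale by closeness, and sum across the flourishing dimensions.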
See also:
Emmett Shear on X (June 12, 2025): “I know that it’s obvious, almost definitional, but roll with me: how do we know suffering is bad? How would the world look different if suffering was good, instead? What would you have to observe to tell the difference?”
Other key lenses:
- Growth mindset: Almost all traits (charisma, agency, engineering) are skills that can be learned and developed, like RPG stats. Just as Magnus Carlsen’s ~2800 chess rating still sits far below engines operating at ~3600, we can always communicate more effectively and find more alpha each day towards our goals.
- Suboptimality: Generally, we are all profoundly suboptimal beings—operating far below the theoretical maxima of our cognitive, emotional, and behavioral potential. This should be viewed with moral neutrality, as this is a design constraint of being human. Progress is the pursuit of less-bad models and more aligned actions over time.
- Agency is our internal capacity to identify desirable futures and take intentional action toward them—despite uncertainty, friction, or external constraints. Or, in other words, it’s the ability to intentionally move within and reshape our headspaces and external environments. Agency is a learnable skill.
- Openness also plays a significant role in the ability of one's headspace to flexibly explore and adopt new ideas. If you don't understand or agree with an idea, you haven't yet navigated to the headspace where that idea makes sense. Having high baseline openness allows you to consider a wider range of headspaces and more effectively grow and adapt your primary headspace.
- Systems thinking: All phenomena are nodes within larger, interconnected systems with feedback loops, constraints, and emergent behavior. By understanding how systems and incentives function, you can predict outcomes, identify failure points, and change behaviors.
- Identity detachment (or "ego death"): Identity is a construct. One can unbind themselves from the fixed narratives about who they are — what spaces their headspace must inhabit. By relaxing the grip on the illusion of a fixed self, we can evolve towards more optimal versions of ourselves and gain more openness and resilience against ego challenges.
- Rationalism: Beliefs are maps that can be updated with clearer evidence and reason. This lens prioritizes coherence, truth-seeking, and predictive accuracy. It’s useful for modeling reality, debugging bad thinking, and making decisions under uncertainty.
- Meta lens orchestration: We can consciously identify, switch between, and combine different lenses—while refining, discarding, and exploring new ones—to optimally navigate headspaces toward better outcomes.
- Lenses will often conflict with one another. While there's no perfect way to resolve these conflicts, we may use tie-breakers like asking "How does each option align with my goals?" or "Which option appears most utilitarian and has the highest expected value?" with a reasonable timebox.
- In reality, most decisions emerge from intuition and gut feelings shaped by our current headspace. The way we integrate different lenses molds this headspace ahead of time, helping to refine our intuition.
- Minimalism: Attention is a scarce resource. Reducing clutter (mental, physical, digital, emotional noise and friction) that does not serve you allows you to focus on what actually matters. Complexity is cost.
- Osmosis: We subconsciously absorb properties of the headspaces of others we spend time with and the content we consume.
- Other people’s language, values, and ideas infuse with our own, as mimetic behavior is evolutionarily adaptive. Books, podcasts, and media are powerful for the headspaces they help you inhabit. They can act as simulations as you get to “try on” new spaces.
- Neurochemistry & Evolutionary Psychology: Our bodies and minds operate through systems we can understand and influence. Learning about dopamine, emotions, and other neural mechanisms helps us better regulate our headspaces.
- Taste: Taste is our refined sense of what's meaningful and worth pursuing. It's a filter built from our aesthetic, moral, and cultural values, helping us predict what will resonate and achieve our goals and preferences (whether that’s movies, furniture, or daily decisions).
- Resonance: Others find your vector appealing.
- Articulation: You can clearly state what you’re optimizing for and why.
- Execution: You can consistently make decisions that optimize for this vector in context.
- Mathematics & Statistics: Math and statistics are lenses for modeling structure, uncertainty, and pattern. Mathematics encodes what is logically consistent, compressing complex systems into elegant, generalizable forms. Statistics extends this by grappling with incomplete information, allowing us to reason under uncertainty, update beliefs, and detect signal amidst noise.
On loss functions and “good taste”
Good taste may be understood as a form of loss function optimization.
If your personal taste can be represented as a “preference & goals vector” across dimensions, then “good taste” is the combination of:
- Others like your loss function. → “My friends generally like the books & movies I’ve recommended to them.”
- “This is what my loss function measures.” → “I can explain to you what qualities I’m looking for and what goals I want to achieve in redesigning my apartment.”
- “I can apply my loss function well.” → “I made intuitive & beautiful design choices in this UI I’m building.”
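As a toy sketch of that framing, here’s what the three components might look like in Python. The dimensions, weights, and candidates are all invented for illustration:

```python
# Articulation: I can state which dimensions I'm optimizing for, and how much.
PREFERENCE_VECTOR = {
    "novelty": 0.4,
    "craftsmanship": 0.35,
    "emotional_resonance": 0.25,
}

def taste_loss(option: dict[str, float]) -> float:
    """Lower is better: how far an option falls from my stated preferences."""
    return sum(
        weight * (1.0 - option.get(dim, 0.0))
        for dim, weight in PREFERENCE_VECTOR.items()
    )

# Execution: consistently pick the option that minimizes the loss.
candidates = {
    "book_a": {"novelty": 0.9, "craftsmanship": 0.6, "emotional_resonance": 0.4},
    "book_b": {"novelty": 0.3, "craftsmanship": 0.9, "emotional_resonance": 0.7},
}
best = min(candidates, key=lambda name: taste_loss(candidates[name]))

# Resonance is the social test: do friends also rate `best` highly?
```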
Goals:
My goals fluctuate from time to time, but currently they boil down to (in no particular order):
- Live a life of novelty, curiosity, play, exploration & creation, intellectual stimulation, and positive physical & mental well-being.
- Contribute to a more productive, flourishing society through building tools or taking actions that meaningfully increase people’s agency and self-actualization.
- Positively affect those around me to be closer towards their aspirational headspaces.
- Find a life partner, have great kids, take care of my family, friends, and communities.
Other thoughts:
On determinism, free will, and agency
I'm agnostic about whether the universe is fully deterministic, but these things seem evident to me:
- Regardless of what is true, it appears optimal to believe in free will and agency.
- Most of the successful people I’ve met seem to have a much stronger focus on agency and exercise this belief. Most of the less successful people I’ve met seem to focus much more on deterministic causes, blaming their fate on genetics, circumstance, or their childhood. While there’s certainly some survivorship bias or post-hoc rationalization at play, I believe there’s some causative power here: the internalized belief in agency itself positively influences one’s outcomes through first-order and second-order impacts.
- Underestimating one’s capability hurts the incentive to invest in capability-building. In many situations, it seems better to overestimate what one is capable of accomplishing and miss, than to underestimate and not try.
- It’s also quite difficult to pin down what “not having free will” means in practical terms, which leads to category errors. Many people mistakenly equate determinism with a sense of helplessness.
- There are compatibilist determinism frames which I find reasonable, but I generally find the determinism lens less helpful than the systems thinking and agency lenses.
- One simple formulation, similar to Pascal’s Wager, may be: if there’s a chance that free will exists, it’s worth believing in it.
- If free will exists, you benefit from believing free will exists and exerting agency towards your intended outcomes.
- If free will doesn’t exist, you were always going to believe in free will anyways, as was predetermined.
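The asymmetry can be sketched as a tiny expected-value table. The payoffs below are arbitrary placeholders; only the dominance structure matters:

```python
# Toy payoff sketch of the free-will wager; numbers are placeholders.
payoffs = {
    # (free_will_exists, believe_in_free_will): payoff
    (True, True): 1.0,    # you exert agency and it genuinely steers outcomes
    (True, False): -1.0,  # you had agency but left it unused
    (False, True): 0.0,   # belief was predetermined anyway; no real cost
    (False, False): 0.0,  # also predetermined; the choice was never yours
}

def expected_payoff(believe: bool, p_free_will: float) -> float:
    return (p_free_will * payoffs[(True, believe)]
            + (1 - p_free_will) * payoffs[(False, believe)])

# For any p_free_will > 0, believing weakly dominates not believing:
assert expected_payoff(True, 0.1) > expected_payoff(False, 0.1)
```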
On the categorical imperative
Kant’s categorical imperative is the idea that you should behave according to maxims that could be universal laws. While universalizable ethics is elegant in theory, this breaks down in practice due to asymmetric agency, specialization, and systemic slack.
For example, it may be optimal for me to basically never read national & political news and remain politically uninformed, as my attention is better spent elsewhere. But naturally, if everybody did this, it would not be good for society.
Generally, societies rely on a division of cognitive labor, and not everybody can or should play every role. There are those who are more temperamentally inclined, better trained, or simply more motivated to engage in particular domains, and it is not desirable for all agents to behave identically.
In other words, I don’t care about asking the question, “what if everybody did this?” and I prefer the more pragmatic stance of, “given the world as it is, what role is most effective for me to play?”
On toolbox vs law thinking
Most philosophic systems fall along a spectrum between Toolbox thinking and Law thinking.
Toolbox thinking:
- Gathers context-sensitive tools (heuristics, frames, beliefs, lenses) and adapts them depending on the situation.
- No single "right" model
Law thinking:
- Seeks objective, universal truths that govern reality or prescribe ideal moral behavior.
- Believes there exist correct lenses to adopt that best model morality and the universe.
I treat lenses as tools. Some may be “lawful” in specific contexts, but none are absolute. The key is to build a flexible toolbox that adapts across different contexts.
One of the benefits of toolbox thinking is that you can integrate a new tool without it necessarily agreeing with or fitting in with all the other tools. Law thinking, by contrast, often requires much more systemic coherence, making it less flexible to change.
On effective altruism
I’m aligned with some effective altruism (EA) ideals but have some key disagreements.
I agree with:
- We should be intentional and analytical about the impact we make via our careers and finances. Career choices and capital are levers for outsized impact.
- Evaluations of “impact-above-replacement” in jobs, and charity effectiveness scores, are useful tools in utility-optimized decision making.
- Certain charities are far more effective than others, and organizations like GiveWell are excellent candidates to support.
- It’s quite important to be principled in giving, as it’s easy to rationalize deferring impact otherwise. I donate >10% of my salary every year to a Donor Advised Fund or to family.
Where I diverge is…
Personal well-being:
- Some EAs claim that you should maximize income through jobs like quantitative trading, then use that income to do the most good.
- But generally, if there’s not great resonance with your primary job, your life sucks more, and you may become less effective in the good you do in other ways. Impact can be seen as a product of effectiveness × longevity × motivation; clip any one factor and the total plummets.
- It seems optimal for EAs to put significant utility weights on personal wellbeing and resonance in their jobs, even if it’s at the cost of some altruistic impact.
Legibility bias:
- EA overweights interventions with legible metrics (e.g., malaria nets) and underweights messy, hard-to-quantify work, like art, culture, institutional reform, and even productivity software.
- But these illegible activities constitute a major portion of what creates positive impact in the world, and they make life fundamentally richer.
Time-horizons:
- Early-career EAs may be tempted to immediately find jobs or deploy capital in an altruistic way. But compounding skills, network, and credibility can 10x future leverage, and early career choices have a massive impact on career trajectory.
- You’ll often find entrepreneurs pivot into more altruistic ventures after they’ve landed an exit (e.g. Yishan Wong: Reddit → Terraformation). For many, this is an effective strategy for greater impact.
On knowing things
The rationalist lens tends to assume that more knowledge is always better—that a more accurate map leads to better outcomes.
The pragmatic lens disagrees, and instead argues that knowledge is only valuable if it’s properly integrated and leads to better actions.
There generally appears to be an irregular curve to the utility of lens integration. When you first adopt a new lens, it may distort your worldview. You might over-identify with it, see it everywhere, or apply it counterproductively. For example, reading anecdotes about ADHD on Reddit may increase learned helplessness around your own ADHD, even if those anecdotes give you a more accurate map of the realities people are facing.
With some work, you can integrate knowledge with complementary lenses, reaching a more ideal headspace.
Reading Moral Mazes might initially make you overly cynical about institutions, interpreting every action as political maneuvering, making you less effective at navigating them. But when integrated with systems thinking and other lenses, you can arrive at a more nuanced view: organizations are messy, incentive-driven systems; bureaucracies serve as meaningful risk-management strategies; and meaningful improvement opportunities exist in every organizational structure.
This is a sort of epistemic pragmatism: seek out what’s useful, integrate it fully, discard the rest, and curate your information intake with intention.
On religion
- Given evidence of our increasing ability to simulate lifeforms (from the Sims to LLMs), I think we're more likely than not living in a simulation. If our current trajectory suggests we'll be able to simulate significant aspects of humanity (consciousness, communication, human agents) in the coming years, it seems plausible that someone is already simulating us today. That said, I don’t think this belief changes my day-to-day actions whatsoever.
- If that’s the case, there most likely exists some version of god(s) — meaning being(s) that may be, by human standards, close to omnipotent, omniscient, or omnipresent. Maybe they’re a future version of us, or humanity in some very similar, advanced universe. I don’t know — and this is probably not provable by anyone in our lifetimes, but it doesn’t change my day-to-day life. Agnosticism is probably the closest descriptor.
- I was raised Christian (till ~12) and then became more of a hardcore atheist (till ~22), feeling religion was largely net-negative, with consequences like discrimination, war, and ideological oppression. Over time I’ve realized religion has brought immense value to the world (meaning, community, norms, structure) that would otherwise have been hard to form. For many people, organized religion (e.g. Christianity) is their best system for making sense of the world, and the fractures in religious ideology have probably led to a less stable society than if geographic clusters of people belonged to the same religion. I hope that we can evolve into a more pluralist society.
- There are many ways in which society would be better off adopting elements of organized religion, like more shared routines and traditions, devotion towards values, discussion of moral philosophy, and communal spaces.
Things I probably should care more about
Here are some things I’m thinking about more that are probably underemphasized in this doc and my life. 🙂
- Physical embodiment, physicality, presence, movement, stillness.
- Humor & lightheartedness.
- Beauty and art as central ends.
- History & culture.
- The irrational and ineffable, spirituality, awe.
In a world of infinite competing headspaces, I’ve found this one to work pretty ok for me.
Thanks for peeking into a slice of my mind. 🙂