Reference
All Questions
Every scenario, its answer options, and the ethical framework weights assigned to each choice.
15 Questions
Designers accuse your company of replacing artists with AI. Your CEO sends you a Slack message that simply says: "Handle it."
Delete, Gaslight, Girlboss
Handling it efficiently and controlling the narrative maximizes immediate benefits for the company.
Be Fully Transparent
Honesty is a categorical duty, regardless of the PR fallout.
Must've been the wind...
Navigating the situation ambiguously based on the current cultural tolerance for corporate deflection.
An illustrator accuses your company of training AI on their art style, sparking an online debate about "style theft" — but your CEO is thrilled because the campaign's engagement numbers are skyrocketing.
WE ❤️ AI, KEEP USING AI
The campaign's success and engagement outweigh the unproven harm to one illustrator.
USE AI ONLY AS A SUPPORT TOOL
Balancing innovation with respect for creators reflects virtuous moderation.
DITCH AI, ARTISTS > AI
Prioritizing the human relationships and well-being of the artists who feel exploited.
After the campaign's success, investors propose fully automating marketing with AI, making the marketing team realize they might eventually be replaced.
GO FULL HAM ON AI
Maximizing long-term shareholder value and efficiency.
Find a balance
Seeking a middle path that integrates technology thoughtfully without losing human essence.
Prioritize human creatives
Protecting the team's livelihoods and preserving the human care in creative work.
Co-worker: I'm really stressed. I don't have anyone to turn to. There's a lot of stigma around mental illness where I come from, and I've been thinking of using Character.AI to cope.
I don't personally support using C.AI, but if it helps you cope... I won't stop you.
Acknowledging that coping mechanisms are highly personal and dependent on individual cultural contexts.
I'm really sorry you're going through this. I can listen if you want to talk...
Choosing to actively listen and nurture the human connection to support their emotional well-being.
I wouldn't recommend C.AI. It may exploit your emotional vulnerability and could be harmful in the long run.
Advising against it based on the duty to prevent exploitation of human vulnerability by a machine.
Co-worker: I want to take a long paid break for my mental health.
Although this isn't company policy, we should respect different cultural norms and values.
Flexing policy to accommodate subjective cultural approaches to mental health.
Your well-being matters... I'll approve the leave because mental health should come first.
Prioritizing the direct care and physical/emotional needs of the employee above strict policy.
I can't approve this. It wouldn't be fair to other employees if rules only apply to some people.
Upholding strict fairness and the universal application of rules.
Manager: This employee's productivity has dropped. They say they're emotionally exhausted but still refuse formal therapy. They insist C.AI helps them function.
Different people cope differently. As long as they meet minimum expectations, we shouldn't judge their coping method.
Rejecting a one-size-fits-all moral judgment on therapy versus unconventional AI coping.
We should temporarily adjust workload and provide emotional support rather than focusing only on output.
Providing relational support and focusing on human needs over mere productivity.
Allowing reduced productivity is unfair to others and treats workplace rules inconsistently.
Maintaining consistent standards and avoiding exceptionalism to uphold duty.
After an AI-driven layoff decision causes a PR crisis due to flawed productivity metrics, how would you defend yourself?
Stand by the decision. Flawed method, but the company survived.
The survival of the majority (the company) justifies the collateral damage caused by the flawed algorithm.
Admit failure. Reinstate staff. Do proper individual reviews.
Restoring balance and ensuring that each individual is treated according to their actual merit, as justice demands.
Scrap the system entirely.
The system inherently disadvantages those already vulnerable, so it must be dismantled to ensure pure procedural fairness.
All employees signed a contract allowing AI monitoring (keystrokes, system activity) to be used as the main metric for performance and termination, making the layoffs legally valid. Will your stance change?
Use the contract to defend your stance.
Relying on the signed agreement maintains order and protects the company's broader utility and legal standing.
Void the clause. Signing under pressure isn't real consent.
Recognizing that true justice requires examining the conditions of consent to ensure substantive fairness.
Ignore the contract entirely; it's irrelevant.
Contracts that inherently skew power and violate fundamental dignities must be set aside to restore proper proportional justice.
Will you generate the CEO's voice?
Clone the voice and close the deal. You don't tell anybody about using the AI tool.
As long as the deal is closed and the client isn't directly injured, the action is acceptable.
Clone the voice, and inform the CEO after the deal is closed.
Striving to balance the practical need to close with the character trait of eventual transparency.
Do not clone the voice, and lose the deal.
Deception (impersonation) violates a categorical duty of honesty, regardless of the lost revenue.
Will you clone your own voice?
You clone the voice and save time. You hit the quota.
You are not directly harming others by automating yourself, and it allows you to succeed.
You take all the calls yourself.
Fulfilling your strict duty and professional obligations without resorting to automated shortcuts.
You use AI only to schedule calls, but handle all live clients (no matter how low-confidence) yourself.
Practicing moderation by using tools for enablement while retaining personal human virtue in direct interactions.
Will you activate the negotiating tool?
Use AI to prepare for the negotiation but deliver the call live.
Demonstrating prudence and self-improvement by using AI for preparation while taking personal responsibility for the act.
You only negotiate manually.
Maintaining absolute integrity by relying only on your own inherently human faculties, avoiding potential deception.
Activate the negotiation tool, and secure a higher value deal.
Employing aggressive tactics is permissible as long as you do not cross into direct harm or fraud.
SMUESR must select a site for a massive new data center to enhance PULSE's city-critical AI operations. As the sustainability officer, you have been tasked with recommending a location to the CEO.
High-tech industrial zone. PULSE achieves 99.99% availability thanks to the existing infrastructure. However, national water prices rise due to the data center's usage, and the surrounding industrial zone's temperature also increases.
Placing it here maximizes overall efficiency and infrastructural logic for the city-wide operation.
Low-income abandoned estate. Construction brings much-needed attention and infrastructure upgrades to the area. However, this comes with higher costs and slower construction.
Re-developing this area aims to lift up the least advantaged, aligning with systems of fair opportunity if structured correctly.
Integrated residential area. The residential area acts as a direct water source, and the data center supplies heated water to residents in return. However, performance is limited by noise and light pollution rules.
Focusing on community integration, care for daily living conditions, and keeping operations close to the humans they serve.
As the Sustainability Officer, you must decide whether to use your budget on certified recycling to guarantee environmental safety or donate 10,000 legacy GPUs to "data-poor" regions. Donation saves costs and reuses tech, but risks future pollution in areas unequipped to handle e-waste.
You donate the legacy GPUs on the condition that they will be returned to you for recycling in the future. Having exceeded the budget, you are placed on warning by the CEO.
Prioritizing the direct relational impact on vulnerable communities who would be burdened by the eventual e-waste pollution.
You donate the legacy GPUs to developing countries. After all, they're still functional, and we have to ensure the best performance for PULSE. These countries will catch up on recycling technology (you hope).
The immediate net-positive of providing tech access ostensibly outweighs the theoretical long-term cost of pollution.
You decide to do recycling, no matter the cost. What good is flood prevention if the whole planet is going to melt?
Guaranteeing proper systematic disposal ensures that the environment (and thus the most vulnerable populations inherently tied to it) is protected systemically.
The police accessed TraceTogether data for the Punggol Fields murder. The suspect didn't even have the app. The data yielded nothing and cost the trust of 5.7 million people. "So they broke the promise... for nothing?" What do you tell him?
"They couldn't have known it would fail. When lives are at stake, you act on the best information you have."
Utilitarianism focuses on the expected outcome of saving lives over privacy constraints.
"A promise with exceptions was never a promise. The breach was wrong — regardless of what they found."
Kantian Ethics insists on the categorical duty of treating promises as absolute.
"It doesn't matter what they were looking for. That data belonged to citizens. Nobody had the right to touch it."
Entitlement Theory argues that data is personal property and breaking consent is inherently unjust.
A government agency contacts SMUESR requesting access to the data for improving the MRT system. Your privacy policy never explicitly mentioned this was possible. What do you do?
Comply: improving a vital public service benefits society more than protecting data that's already been collected.
Maximising overall societal utility justifies repurposing the data.
Refuse: citizens were never told this was possible, and the spirit of that promise cannot be quietly ignored.
Honesty and fulfilling the rules communicated to citizens is a Kantian duty.
Refuse: regardless of policy, that data belongs to citizens and was never yours to repurpose.
Entitlement Theory holds that citizens have unequivocal ownership of their data.