AI at the Intersection of Ethics and Politics
Preface: I work in AI, I use AI at work a lot, and I think it’s a tool with tremendous promise, but we must be mindful of how it can be misused. With that in mind, here’s what I wrote before writing this preface.

I recently came across this article on how a private school in the US is essentially using students as test subjects for an AI education experiment. The school in question, Alpha School, has been praised by the current administration, is now known to scrape the internet for course content without the consent of the content owners, and has been reported to cause “more harm than good.” I find the ethical and political implications both appalling and interesting. On the one hand, the school is completely ignoring the ethics of using students as test subjects and failing them as educators; on the other, it seems to serve certain political groups’ goals, in my opinion, in that they want to create a populace they can control. I bring this up because we’re learning and talking about the ethics of our human-computer interaction designs and experiments, and about the politics involved in design, and I think this is not just a great example but the quintessential example of what we should be looking out for in the near future with regard to how AI can amplify the values of certain people over others.
The school is deeply rooted in the homeschooling movement, and it’s using AI in its pitch to convince parents that it can cram SAT and AP test prep into 2 hours. But the assessments it uses to gauge the students’ understanding of the material make no sense. It seems the owners don’t care, because it serves their political interest of not having to hire teachers, letting them keep more of the profit from parents paying tuition. Which brings me to the parallels with AI in other industries, namely programming and software engineering. AI tools are being pushed, in some circles, as replacements for traditional software engineering hires, but the code they write is often average at best, oftentimes worse: full of bugs and a nightmare to maintain. Yet it seems certain people don’t care, because it aids their short-term profits.
AI is not ready. https://t.co/HffDM1jE6e pic.twitter.com/WfplNpalB9
— Mo (@atmoio) February 16, 2026
And that’s not even to mention how the AI industry hype is affecting other aspects of society, not least through the shortages of computer parts it’s causing.
Now, I don’t want to make this a rant about how AI is bad or anything like that; I’m actually a big proponent of the benefits of AI. But we must keep in mind the kinds of things we’re going to amplify: the values we instill in these systems might not be the values we want in a society, and we need to reckon with that. What I’m seeing instead is a lot of misuse, outright disregard for ethics, and certain values, evil ones in my opinion, being pushed and amplified to an extreme.
Edit 1:
As this was being written, this popped up on my socials:
Meta has patented AI that can run a dead person's account, continuing to post and chat on their behalf
It can message and video call by replicating a user's online behavior using their past data pic.twitter.com/2FiExpQmMl
— Dexerto (@Dexerto) February 16, 2026
I think it illustrates my point very clearly: we need to consider the ethical implications of our choices now more than ever, not just charge headlong into the future making tech for tech’s sake.