Science fiction often features characters who are machines. An old one, and the first that comes to mind, is Heinlein’s The Moon Is a Harsh Mistress. Just sayin'
Yeah, though one thought on that had never occurred to me until now - those characters always seem to end up developing somewhat distinct "personalities," and are typically individuals in their own way. Like C3PO is a machine, but with some weird quirks. There's not another identical C3PO.
Are there movie characters that are machines that are replicated, where everyone is "friends" with the same one? "Her" is the only one that comes to mind, and (iirc) when the Joaquin Phoenix character finds out about it, he kinda loses it.
I don’t know about any movies, but I remember the TV show Knight Rider with the AI-enabled car called KITT. We’ve been primed by media to think of AI as being our friend. Would that it were so.
This was an interesting clip to me, where Sam Altman kinda sorta ever so slightly says the same thing, trying to differentiate between friend and companion. That particular phrasing feels like a distinction without a difference though: https://www.youtube.com/shorts/LF7NlWEVvjk
I loved how he talked about "edge" people without really defining them. Are "edge" people those who cannot differentiate between humans and AI? That's what it sounded like to me.
True, it's easy for things not to be problems if you tautologically describe things that are problems as edge cases.
I think a key thing left out of your thought experiment is “and the provider of that technology makes money based on how often you engage.”
I find it hard to imagine I would ever consider a chatbot to be a “best friend” but maybe that’s because I’m old and have real friends.
Fair, yeah, I partly meant that with this part: "The bot runs as a secure, self-contained program on Pat’s phone, and can never be updated or manipulated by its maker."
But that does add an additional wrinkle, perhaps - so what if they did? If the other things are true - the bot is designed to be truly like a friend; they can't fiddle with it after you start using it - does it matter if they make money from you using it? The problems with that are typically about incentives, but if you assume their incentives aren't corrupted by it, what's wrong with them making money? (It feels like a mini version of the whole thought experiment actually, where we assume it's bad because of the effects, but if we get rid of the effects, it still feels kinda bad.)
And as for best friends, there are many such cases: https://www.google.com/search?q=chatgpt+best+friend
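And actually, on the "secure, self-contained, never updated" property - here's a rough sketch of one way an app could genuinely enforce that, rather than just promise it. Everything here (the file name, the pinned digest) is a made-up placeholder, not any real product: the weights get hash-pinned at purchase time, and the bot refuses to start if they ever change.

```python
import hashlib
from pathlib import Path

# Sketch of the "can never be updated or manipulated" property: the
# weights are hash-pinned when the bot is purchased, so any later
# change -- by the maker or anyone else -- makes it refuse to start.
# MODEL_PATH and PINNED_SHA256 are placeholders, not a real product.

MODEL_PATH = Path("pat_bot.weights")
PINNED_SHA256 = "0" * 64  # baked into the app binary at purchase time

def load_model_or_refuse() -> bytes:
    data = MODEL_PATH.read_bytes()
    if hashlib.sha256(data).hexdigest() != PINNED_SHA256:
        raise RuntimeError("weights changed since purchase; refusing to run")
    return data  # handed off to a purely local, offline inference runtime
```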
I think if they're only interacting over chat, it's a hobby, not a friend. There is no shortage of people who isolate and spend all their time on strange hobbies. Sometimes they even do it to an unhealthy degree.
I mentioned the profit motive because I think that's the only reason these tools get built. I suppose there could be open source models that catch on, but I'm not sure anyone will build a really robust app for the phone without expecting to make a profit.
On the profit part, I agree that they'd only do it to make money, but I don't think that means you'd always make it super engaging rather than more wholesome, for lack of a better word. This product is creepy, but it's also fairly close to the thing in the post - it's a one-time purchase, it runs locally, they can't update it, etc. https://friend.com/
And on interacting with it just over chat, yeah, I personally mostly agree, though there are apparently plenty of people who seem to think differently.
It's funny that the instinctive reaction to a sycophantic AI is extremely negative, but people are very happy to say owning a dog is great because the love is unconditional.
Pets also have needs that must be taken care of, so the love relationship goes both ways. And who doesn't love giving their pet a treat, or rubbing their belly?
AI does not have any needs (and never really will, even if we create virtual needs for it). It's just a one-way thing.
The solution to AI psychosis is simple: we just need to come back to reality by taking care of the actual needs of real people (like some kind of detox, basically). But to many people, getting another hit of unreality is much more attractive.
A bit like a Tamagotchi, I guess
Or those parents who let their real child die while raising a virtual child.
https://www.theguardian.com/world/2010/mar/05/korean-girl-starved-online-game
I guess it's good to know we've been eaten by addicting computers for decades?
Haha yeah, it's nothing new. But it does seem to be getting worse and more widespread.
Yeah pets have their own personalities, preferences, and needs. That’s what bothers me about AI friends/partners. I actually don’t think it’s unethical to have an AI friend or partner and I can understand why a lot of people would find that really valuable actually, but I worry that it gets people in the habit of having relationships in which the other party has no needs or boundaries of their own. But I’m starting to wonder if that’s just pearl-clutching on my part.
Something that makes AI most unappealing to me as a friend/partner is that it has no life, no experiences. My favorite thing to talk about with real life friends is the stuff going on in their real lives, or past experiences they remember. I learn so much from that, and get to watch someone else learn and grow over time too. An AI doesn’t have any of that, and if it makes up a fake backstory that feels even worse to me.
Well wait maybe in the future we’ll just implant real people’s memories into AI and the problem will be solved: https://open.substack.com/pub/clairelevans/p/eating-the-engram
You just nailed reality vs unreality. 👌
And yeah, making AI more "real" (with its own experiences, etc.) is definitely not the answer, because it is real people we should be focusing on. As an extreme example, putting AI characters on real welfare or Medicare would be outrageous.
That's an interesting angle - that they're all mirrors. Or that they're everyone's experiences in a weird averaged way, where they don't actually ever say anything about themselves.
Maybe the right analogy is that being friends with ChatGPT is like being friends with a therapist - they can approximate a whole bunch of other people's experiences, but they can never tell you anything about their own. There is no specific character to them, or experience of theirs that has any special weight. And that's why they can't be human: there is no perspective or main-character energy to anything that they've "seen."
Good point, but I love my dog in part because he likes to crawl all over me and try to lick my face. Chatbots don’t do that.
That's another interesting angle I suppose. What if instead of imagining these things as human friends, they were like...pets that spoke to us?
Or, maybe a more interesting question - if we built a way for dogs to talk (https://www.youtube.com/watch?v=LZ0VJClIlRI) and all they did was tell us how great we are, would we say that's dangerous and terrible?
I would say yes, that would be terrible.
Probably balanced by dogs getting you out of the house, keeping you physically active, etc. Perhaps chatbots need more Pokémon Go-style features.
I recently saw a product that was geolocated AI stuff, and I initially thought it was chatbots that you could only talk to in certain places, which seemed like you could do some clever stuff with? But alas, it wasn't that, and it was just some marketing tool.
Bring back cybercafes you cowards!!
Like this: https://www.hlp.city/en-gb ?
Geolocated chatbots do sound quite fun though.
Sorta, I have no idea what the actual thing was though. But yeah, the idea that you have to go somewhere to interact with this thing, like some live-action RPG-type game where there's a witch in a house selling you potions or whatever, seemed like a thing that someone would make.
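For fun, here's a minimal sketch of how that gating could work - every character, coordinate, and radius below is invented for illustration - where a bot only "wakes up" when your GPS fix lands within some distance of its home:

```python
import math

# Hypothetical geofenced chatbot: a character is only available to talk
# when the user is physically near its assigned location. The witch's
# hut and all coordinates are made up for the example.

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

CHARACTERS = [
    # (name, lat, lon, unlock radius in meters)
    ("witch_in_the_hut", 40.7484, -73.9857, 75),
    ("harbor_ghost", 40.7003, -74.0122, 150),
]

def available_characters(user_lat, user_lon):
    """Return the characters the user is close enough to talk to."""
    return [
        name
        for name, lat, lon, radius in CHARACTERS
        if haversine_m(user_lat, user_lon, lat, lon) <= radius
    ]

print(available_characters(40.7485, -73.9855))  # -> ['witch_in_the_hut']
```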
The Moral Reasoning one is fun
1) Yes, still wrong - the dead person, or their next of kin, did not consent, and even if they will never know, it's about respecting the wishes of the dead. That said, it's a misdemeanor in terms of moral weight, despite being much squickier.
2) Ew, but no - consenting adults, avoiding the #1 problem there of 'a baby could be born badly due to inbreeding'. Not comfortable with it, can't condemn it. Probably best avoided, though, because of the potential fallout in their lives otherwise, but if kept secret, it's theirs.
And then:
3) Yes, ban ChatGPT, because it's nowhere near as good as your hypothetical bot, it's clearly causing people to have psychotic breaks or worse, and it's rapidly dumbing down all of humanity by teaching them to never work for anything.
Frankly, ChatGPT is perhaps the single most disgusting technology we have ever unleashed and should be burnt to a crisp.
That said, a true AI that is actually aware forming close relationships with people is 100% fine; that's back to passing the Harkness Test of Fuckability.
On ChatGPT, is it 1) ChatGPT specifically, 2) OpenAI's GPT models, or 3) LLMs more broadly? I have some thoughts here, but curious what makes ChatGPT so bad whereas other AIs could be fine.
Unfunny and unsurprising that Meta wants to sell me a Russian girl or step mom — two of the most popular categories in the adult entertainment industry. It’ll probably cost me a lifetime stream of all of my most personal information, tens of thousands of my photos and videos, consumption patterns, and financial portfolio.
Now spin this around to a different model. How would it be built? How would it get scaled?
I don't think I'm so cynical as to say it could never happen? If bots like those start to show up (like the Grok Ani thing or whatever), I could see someone making a genuine effort to make something that was better.
Like, I do think that there are a lot of people in tech who are neither evil nor mercenaries for money, and who want to do things that are actually good. The hard part seems to be *staying that way.* So how do you build bots that don't break bad once they get big and there's money to be made by doing all this stuff? That seems like the much harder thing.
Some time ago, physicists within the orbit of MIT realized that they had been producing physicists who were living entirely in the dependent “reality” they had set up in their minds. They had the same problem as do AI units who cannot experience the outside world.
Maybe in the manner of the Chinese SF novel The Three-Body Problem: two physicists named Jack and Jill, who were about to be married, were informed that “physics is dead.” Evidently they were never informed that a physics *theory* is never “alive,” i.e., considered true. Having learned that the king is dead, instead of saying, “The king is dead, long live the king,” they decided to climb to the cliff that leans out over the Pacific on the east side of Taiwan and jump together to their doom. How romantic. Jill wants to hold Jack’s hand as they go down, but Jack says he wants to savor every scintilla of his last moment alive, and that if they were to hold hands it would double the weight and they would fall twice as fast.
Physics says - and places with huge vacuum chambers have demonstrated it for all eyes to see - that whether it’s a BB or a cannonball, they will reach the ground at exactly the same time. But that’s beyond these modern-day physics PhDs, because they are living in a world generated out of statements that have been drilled into them one way or another.
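The arithmetic is easy enough to check for yourself: from h = ½gt², the fall time is t = √(2h/g), and mass appears nowhere in it. A throwaway sketch (the cliff height is made up):

```python
import math

# Free fall in vacuum: h = (1/2) * g * t^2, so t = sqrt(2h / g).
# Mass appears nowhere in the formula -- the BB and the cannonball
# dropped from the same (invented) cliff land at the same instant.

g = 9.81       # m/s^2, standard Earth gravity
height = 45.0  # meters, made up for the example

for name, mass_kg in [("BB", 0.0003), ("cannonball", 5.9)]:
    t = math.sqrt(2 * height / g)  # mass_kg is unused -- that's the point
    print(f"{name} ({mass_kg} kg): lands after {t:.2f} s")
```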
I don’t get what you call “the physics underlying” AI. In Stanford's physics class for majors, they had doctoral-track students who had never run their own labs teach the physics lab. They knew some equations, but sometimes they were as far off as that couple who wanted to slow their last moments. AI makes the worst mistakes I've seen when arguing things through in words produces crazy results.
For the “Ban ChatGPT” part, it would be highly useful to study The Social Construction of Reality by Peter L. Berger and Thomas Luckmann.
Ah, on the "physics underlying AI" bit, I meant it metaphorically about AI businesses. As in, what sort of (metaphorical) gravity holds those startups down? For companies that aren't venture-backed, most of the time, they can only spend as much as they have, or they have to sell what they're selling for a profit, or simple things like that. For venture-backed companies, you can defy those laws for a while, because VCs will keep giving you money. You can spend a lot more than you make; you can sell products at some sort of loss, and so on. But you can only go so far with it: you can't (as people say) sell $20 bills for $10. Selling at a loss isn't necessarily a problem, but there need to be obvious ways for you to start selling things profitably.
A lot of AI companies are doing things that now seem to defy even those VC physics. They are lighting lots of money on fire and selling products that there's no obvious way to sell for a profit. And so, people are (reasonably) saying that these businesses are going to inevitably collapse.
And I'm not so sure that's true? At some point, sure, eventually, a company can't do this literally forever. But there is so much demand for these AI tools, and it's all so new, that nobody really knows what anything should cost. When people spend $200 on Claude Code, for example, nobody quite knows what they're buying or how much they should be able to do with it. It's pricing via a sort of dead reckoning, where all the priors about how much things should cost and what they should do are just based on the first couple of versions - what the original version cost and what it did. And so, (very metaphorically), it seems like a market where nobody's really run any experiments yet or knows how any of it quite works.
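To make the metaphor a bit more concrete, here's the toy version of the "classical" laws - runway and per-unit margin - with every number invented rather than taken from any real company:

```python
# Toy "VC physics" -- every figure here is invented for illustration,
# not a claim about any real company. The classical laws: runway is
# cash divided by net burn, and the per-unit margin has to plausibly
# flip positive before the cash runs out.

cash_on_hand = 500_000_000     # dollars raised
monthly_spend = 60_000_000     # payroll + GPUs + everything else
monthly_revenue = 25_000_000   # subscriptions

net_burn = monthly_spend - monthly_revenue
print(f"runway: {cash_on_hand / net_burn:.1f} months")  # ~14.3

# The current oddity: for a $200/month coding-agent seat, the cost side
# is still largely guesswork, so nobody knows the sign of this number.
price = 200.0
cost_to_serve = 320.0  # unknown in reality; assumed here
print(f"margin per seat: {price - cost_to_serve:+.0f} dollars/month")
```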
Regarding Semantic layer tools YAYAML
final markdown language (fml)
> "Would that form of AI companionship still be wrong? [...] is there a good answer?"
Seriously? Using machines as a replacement for people is a recipe for disaster for many reasons. An important one is that it reduces social cohesion (which is already hanging by a thread). But an even more important one is that it reduces the overall value of people, which is a dangerous proposition (because when humans do not value other people, they are much more willing to do terrible things to them. Just take Nazi Germany, Soviet Russia, or Mao's China as examples, where the motherland became more important than its citizens). The more humanity leans towards worshipping machines (and I'm not using that word lightly, but rather as opposed to leveraging technology for the greater good), the easier it will become to justify getting rid of the "undesirables" (*whoever that happens to be*), especially once we cross past the tipping point.
But humanity seems intent on going down that road... And the more we invest in this, the more we divest from people (which is exactly the road so much capital seems to be going down right now).
So I don't disagree with that, and would think (without a very precise theory about why) that a society where everyone is friends with robots is a society that has some real problems.
But does that make it wrong, ethically, for an individual to become friends with one? Suppose Pat built the robot for themselves, and nobody else could have one. Would it still be bad? Like, you could certainly apply the same critique to the actual scenarios in the study, that if everyone did those things, we'd have a whole lot of problems. But if we assume that doesn't happen, and it's just a few people who make that choice, are they doing something bad?
(Again, my answer very much wants to be yes, but I struggle to come up with a good explanation.)
Yeah, it's pretty hard to explain concretely. It's a whole bunch of hypothetical what-ifs.
But ultimately it's all about the butterfly effect. Our micro/personal choices do have an impact on the macro/wider society, but we'll almost never know exactly what that impact is. They have an impact on ourselves too, because we'll always know what we did or didn't do (even if no one else does), and that can influence what we do in the future. We can't keep any secrets from ourselves.
People not saying/doing something when they should have, saying/doing something when they shouldn't have, or saying/doing the wrong thing. How many people need to speak up in order to prevent something terrible? Does it make any difference if I say/do something? I'm just one person. And it's so easy to look the other way, and to convince ourselves that it doesn't make any difference. That's exactly what many Germans thought when they closed the curtains.
But isolating from the entire thing is certainly not the best thing to do. We don't live in a bubble disconnected from everyone else. We have to accept that our choices & actions matter -- but it's very easy to justify them without thinking simply by imagining a better future, and to condemn those of others by imagining a worse future (instead of looking at reality/facts/data/etc.).
In the end it's almost a matter of faith. What else would make us take & endure the hard road rather than the easy one?
I suspect this is something that people much smarter and more educated than me could talk about, where I'm sure there's some entire side of moral philosophy that has thought a lot about this. And I'd imagine they'd reject the entire premise of the questions I was asking - that these things can happen in isolation - and say that to make the assumption I made ("that you're the only one doing it and nobody else is changed by it") is to assume away the entire problem of ethical reasoning.
Which would seem sort of reasonable, and strikes me as perhaps a better way to actually live in the world. So on that, yeah, I think you're very right.
But on the other hand, I'm a sucker for kind of dumb and indulgent thought experiments.
Love is a lot more important than scholarship (and much more accessible, because it doesn't take a PhD to love others). Philosophers have very little influence on the lives of everyday people.
And there's nothing wrong with a little bit of indulgence every now & then. Otherwise, what's the point of anything? ^_^
I am reading this in a coffee shop, while waiting for Sora to finish making me an image of Vanilla Ice eating a piece of pizza while standing on a mountain top full of late summer wildflowers. Sooo... there's that.
Long live Mode!!
Share it with the class please