Very interesting read - thank you for sharing.
A couple of points. I think some of what you have described has already transpired. ‘AI’ has not so much arrived as we have awoken to it. Social media has been a giant interaction between ‘AI’ and humans which I believe has coarsened language, although perhaps added some clarity and concision. Trump is quite a good example of the degradation of language, even though I realise his ascent was not purely due to social media.
My second point: I would not refer to AI as a tool. My interpretation of a tool is a technology that is fully understood by the person using it, and does not encourage a particular behaviour. A knife fits this definition: it can be used for cooking or murder. The choice is entirely down to the person using it. AI is understood by few people - perhaps nobody - and influences the behaviour of its users (e.g., social media).
If I prompt an artwork, I consider it vain to imagine I created that work in any meaningful way. The greatest effort came from centuries of artists whose work is being used by the system - which makes me object to the word artificial. Then there are the programmers who created the system, who have a huge influence on the output. Only after this does the prompter enter the equation, and I’d consider them of limited significance.
Have you read The Machine Stops? I came to this short story late, but it strikes me as very relevant to the scenarios you are writing about!
Thank you for reading and commenting!
To your first point: I'd agree to an extent; though I'd clarify that what I was thinking about in this post was more about user-facing, explicit interaction with AI systems, rather than the pernicious, 'below-the-surface' style AI that's been influencing social media for years. However, I'm not sure that there's ultimately much use in distinguishing between the two when considering their wider effects, so thank you for pointing it out! I'll be thinking some more about this :)
To your second point: that's an interesting view; though I'm not sure I'd fully agree. To my mind a tool doesn't have to be fully understood to be used, and whether or not a tool encourages a particular behaviour is up for debate, depending on the tool. The phrase "When all you have is a hammer, everything looks like a nail" springs to mind here: the mere availability of a tool is - at least in my view - enough to influence and encourage a particular behaviour. If you've been served a meal, and cutlery is available - would you choose to eat with your hands, or the cutlery? Maybe a crude example, but I hope it illustrates my point. Would you describe a computer (in the general sense - say, a laptop) as a tool? How many people understand a laptop fully?
Where I would agree with you - and perhaps this is something I'll think and write about in the future - is that it is possible for AI to *explicitly* encourage its users to engage in a particular behaviour: which is something that I'm not sure has any real precedent, and is definitely something we will need to guard against!
I'd agree with you that prompting an AI to produce artwork isn't really 'creating art' in the sense we normally mean it - but I think debating whether or not it's 'real art' is best left to others, ultimately! I think if it brings the 'creator' / user some satisfaction, and the viewer some satisfaction - in any sense - then there is value to the process; even if we don't consider it 'art' in the way we would with traditional works. Either way, I doubt people will stop using AI to produce artwork simply because others object to it or find it shallow / meaningless. That particular debate has been had a thousand times over across decades, if not centuries. The thing I do find objectionable is the unethical use of art to train these models without the artists' consent. It's probably a little less objectionable if the model itself is open-source and trained on publicly available artwork, but I do think artists have a real point when they object to their work being used in this way. It's not something I could profess to argue with any confidence about, however - it's more of a gut feeling than a properly thought-through opinion!
Regarding The Machine Stops - yes I have read it, and it was in fact in my mind whilst writing this post! I should probably give it another read though!
Thanks again for your comment!
Thanks for your reply Tom! Agreed on the permissionless acquisition of art by LLMs. I expect that without it - open source or not - LLMs would have much diminished value. The current crop of AI in that respect aren't so artificial - they are deriving their value from human effort. Naomi Klein wrote an interesting article in the Guardian about this yesterday.
Regarding tools: I guess I'd ask what difference - if any - you'd perceive between the words 'tool' and 'technology'. I wouldn't propose a black-and-white definition, but to my mind computers are not tools... or certainly not once one goes beyond binary simplicity. For example, if I open a music sequencer it will be set up for Western scales and time signatures; it won't have features for certain non-Western concepts such as gamaka. Even as I write the word gamaka I have a red line under it, and it's been autocorrected. I'm not criticising the creators of software, just pointing out that it is not, and perhaps cannot be, neutral.
The anarcho-primitivist writer John Zerzan is, I expect, a big influence on Paul Kingsnorth, who despite his Christian conversion still writes very much from within these narrative structures - hence why he likes Genesis. I'm not an anarchist or a primitivist, but I think they offer good critiques. Zerzan influenced me with his distinction between tools and technology.
With regard to your food example: how do you eat chips and a burger? I can use my knife to open packs of food, trim dead plant leaves, or create a sculpture. It has an affordance - cutting - but the applications are many.
As I mentioned, I see a spectrum, not black-and-white categories. There's also hacker culture etc., which complicates the picture. LLMs - in their current popular forms - strike me as having quite a simple function: extracting value from highly skilled professionals, centralising it, and distributing it in a degraded form for huge profit.
It's interesting you wouldn't consider a computer a tool; I've never really thought of them as anything *but* a tool! But then, perhaps it would make more sense to think of a computer as a toolbox, rather than a tool in and of itself?
I'm not sure what to make of your music sequencer analogy - to my mind I would consider a sequencer a tool; even if it had shortcomings or wasn't suitable for the purpose I had in mind. If something isn't possible with your existing software, then it's surely possible that either new software could be written which does have the features you need; or the existing software could be updated to include them? The idea that these things could be an indication of a lack of neutrality is something to consider; but I'm not sure it disqualifies them from being considered tools. Does a tool have to be neutral to the user? I don't necessarily think so! Just because I can't use a particular tool does not mean that it's not a tool - it just means it's not a suitable tool for me.
I will have to look into John Zerzan - the name sounds familiar but I'm not sure I've ever read anything of his - so thanks for that, I'll certainly look into his work!
For chips and burger - I'll confess to being something of a wild-card: sometimes I use my hands, sometimes cutlery, sometimes both!
I understand your skepticism of LLMs based on the rather unethical way in which they've been produced and promoted, but I suppose I take the view that whether or not we like *how* they arrived on the scene, they are on the scene nevertheless; and we will have to adapt to their existence one way or another. I don't foresee them disappearing - only progressing and finding their niche(s). Unless we have something of a fundamental, radical shift in our thinking, economy and technological outlook, I would suggest they're here to stay for the foreseeable future whether we like them or not!
If we compulsively wish to see machine intelligence and robots in our image, what happens when the reverse occurs? As digital consumption on mobile phones increased over the past decades and our habits became increasingly trained by it, one does wonder what generative AI will do to us. Given how people are already using ChatGPT, one can begin to tell... What will be the new version of the infinite scroll, in place of those favorite apps? How will it further interrupt our humanity, socialization and capacity for human intimacy?
I hadn't even considered things like a "social feed" in the AI era to be honest, but it's definitely something I'll be thinking about now! Perhaps a more "AI" social feed would resemble something like postcards delivered from your friends. Would it even contain their words, or would AI generate a personalised note from them, to us? A nice thing to ponder at least!
In an era of technological loneliness, I guess I wonder if AI is the replacement. It won't be long before an AI assistant will be able to mimic emotional intelligence to the degree that it might be the perfect friend or even the perfect companion. Who could possibly know you as well as such a creation?
Personally I doubt I could ever feel a 'true', human-like connection with an AI; but who knows? It'd certainly be interesting to see just how far it goes. Perhaps one thing we'll discover as AI 'companions' progress is finally, what exactly it *does* mean to be human!
Just made a first quick read - I'll be back later when I have more time.
I am thinking that the underlying issue is one of scale. With IT we can move back and forth between statistics and the individual. This is something new that is hard to understand, and it has political ramifications that we don't grasp: a typical example is the Covid crisis. We have counts of deaths (precise to the unit), models and electronic passes. Everything is mixed up: you can be as precise as the individual level (each vaccine pass can be updated live on your smartphone), and at the same time you have people trying to exchange with whole crowds (like E. Musk and his experiment on Twitter). All this being mediated by AI... What a strange world indeed.
Thanks for reading! I agree it's a strange world and a strange time we're living in - and I'm not entirely sure we as a species have really understood or gotten to grips with the ramifications of both the breadth and the depth with which technology has been woven into our lives. As you say, whether at the individual level or the societal level, there is much to contend with!