Welcome New Readers!
Before I continue with today’s post, I’d like to welcome the influx of new subscribers I’ve had over the past few weeks. It’s been around a month since I had my 100th subscriber, and since then I’ve been averaging a handful of new readers every day. Thank you to everyone who’s taken the time to subscribe, share, and engage with my work here - it means a lot! I hope you all stick around and that you find something here that entertains, educates, or gives you something to think about.
In this 3-part series, I’ll lay out some of my thoughts about where we are with modern technology, and some of the problems I hope we can address.
Introduction: The Digital Divide
I first heard the term ‘Digital Divide’ many years ago, during a conversation about a problem facing the UK workforce: how to help secure jobs and improve the skills of workers during a period of rapid change in the world’s relationship with technology.
Back then, 20-something years ago, I understood the term to refer to the gap between an older generation which was generally unfamiliar with digital technology, computers, and the internet, and a younger generation growing up with the skills of the future (now, somehow, the present!) fostered within them from an early age. As a teenager, I could understand in an abstract sense that ‘making people use computers’ - where previously there had been none - would leave some people struggling; but I had no direct experience of the problem myself. The older people in my life may not have been particularly skilled with technology, but I didn’t get the impression that they cared all that much anyway. I wasn’t exposed to people trying - and struggling - to use computers: people seemed to manage as much as they felt they needed to, and that seemed about as balanced a situation as we could hope for. The ‘Digital Divide’ remained, for me, something that other people needed to worry about, and I got on with my life, blissfully ignorant of the impact - good and bad - of modern technology outside of my own bubble.
When I first became aware of the Digital Divide, the experience for most people - at least those who were lucky enough to have access to modern technology in the first place - was fairly uniform: if you ever used a computer, it was probably a Windows PC at work or school. Some families had a single PC at home, or, if so inclined, a games console. Internet access was far from ubiquitous, and those who could get online were mostly on a dial-up connection - which prevented use of the telephone whilst someone was online, and vice versa. Software came on floppy disks - later CD-ROM - and though it was technically possible to download programs, doing so wasn’t a particularly streamlined or straightforward experience, and so many people never bothered.
Contrasted with today’s consumer landscape, that world felt like another universe altogether. Today, we carry computers in our pockets that outstrip even the most powerful home PCs of my childhood, and the only time we’re ever offline is when we’re out in the sticks (and even that is becoming less of an issue), or deliberately choosing to look up from our screens in an attempt to find some reprieve from the onslaught of information that characterises digital life in the 21st century. We send and receive more data on a monthly, weekly, or sometimes even daily basis than home computers of the 90s could comfortably store on a hard drive, and with a few swipes of a thumb or clicks of a mouse we perform tasks that people just a decade or two ago would have found unimaginable.
The people needing help, back when I first heard the term, have by now likely left the workforce, and though we probably all have memories of helping an older relative or friend navigate some modern tech, it’s easy to think of the Digital Divide as a ‘problem of the past’. Modern tech is now all around us - at work, home, and school - and those of us who grew up with it have watched those who struggled at work retire and, outside of the worst cases, find some kind of harmony with the tech that may once have caused them distress.
Today, the term ‘Digital Divide’ has become diluted into one of those somewhat ambiguous catch-all terms that hint at a problem but don’t provide the context necessary to understand the present - and future - dilemma we face. Broadly speaking, it’s possible to point at any particular area within the technological landscape and identify those who ‘can’ and those who ‘cannot’. Those who are able to benefit, and those who are not. It’s easy to reel off problems with access, experience, and accessibility - and once those issues are identified, it’s not such a great leap to imagine and implement solutions to them. Modern technology can be forgiven, I think, for not being perfectly suited to every person in every situation - it’s unrealistic to imagine that every potential problem can be anticipated in advance.
The problems that the term ‘Digital Divide’ was originally conceived to describe still exist today, though they may perhaps manifest themselves in different forms. Socio-economic, cultural, accessibility and educational factors which limit or prevent people from taking advantage of modern technology are all important problems which we should strive to address - but even for those for whom access is not the issue, there remain problems which are often more abstract and difficult to define.
I believe that the sheer pace of progress and the current ubiquity of modern technology have blinded us to a set of problems which I fear, taken together, risk a deep fragmentation of our relationship with technology and prevent us from taking full advantage of all that’s on offer.
The first of these problems - and the subject of this first part of Shattered Silicon - relates to how we design the modern systems and software that permeate every aspect of our lives and transform our relationship with services and institutions.
The Tyranny of Simplicity
There’s an often-paraphrased quote from Wind, Sand and Stars by Antoine de Saint-Exupéry - that I’m personally very fond of - which goes:
In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away…
The application of this philosophy in the technological world is very common - and plain to see. Companies spend vast sums of money and time figuring out what not to build, and, speaking from direct experience, there’s often a great deal more satisfaction to be found in deleting code than in writing it. Stripping a system down to its simplest, most elegant form is almost an art - but what we often fail to understand is that humans are not simple and - in my case at least - often not very elegant either. In my post The Warehouse Of Horror, I mentioned the challenge of identifying edge cases in software design (circumstances which fall outside the expected norms) and the problems that can arise when developers fail to take them into account.
Our lives are full of edge cases. No two people share exactly the same circumstances, but modern digital infrastructure is often built not to accommodate our different situations, capabilities, desires or backgrounds, but to standardise and regulate the more flexible analogue systems it is ultimately designed to replace. Where once you could walk into a building and speak to another human being, who would (at least in theory) respond to your specific circumstances and tailor their services to your needs, the human face of organisations, institutions or even the government is now often reduced to that of an interactive signpost. They may point you in the right direction, but ultimately you will be routed through a series of unyielding digital checkpoints that will no more bend to your will than they will comprehend your common-sense protestations that ‘the system’ may well be ‘more efficient’, but is still incomprehensibly stupid.
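To make the edge-case problem concrete, here’s a minimal sketch in Python - a hypothetical name validator of the sort that quietly underpins many rigid digital forms. Everything here is invented for illustration, but each rule looks perfectly reasonable to its designer, and each one turns away real people:

```python
import re

# A hypothetical validator for a registration form. The pattern encodes the
# designer's mental model of a 'normal' name: one capital letter followed by
# lowercase ASCII letters. Each rejection below is someone's real name.
NAME_PATTERN = re.compile(r"^[A-Z][a-z]+$")

def validate_name(name: str) -> bool:
    """Accept only names matching the 'expected' shape."""
    return bool(NAME_PATTERN.match(name))

print(validate_name("Smith"))     # True  - fits the designer's assumptions
print(validate_name("O'Brien"))   # False - apostrophes exist
print(validate_name("van Gogh"))  # False - spaces and lowercase prefixes exist
print(validate_name("José"))      # False - accented characters exist
print(validate_name("X"))         # False - single-letter names exist
```

The fix is not a cleverer pattern; it’s accepting that the space of legitimate human input is always larger than the shape any one designer imagines for it.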
Aside from the academic and socio-economic analyses of the Digital Divide, we should be mindful of the risks of building modern infrastructure and services that are so tightly regulated and standardised that their ‘perfect customers’ are the architects themselves: people who don’t need to use the systems they build but who nevertheless refer to a dashboard of statistics and focus groups that confirm the perfection of their designs.
Whether it’s a private company or a public service, the move to digitise services, simplify and standardise processes and ‘improve efficiency’ can - and often does - result in new systems and processes which eschew flexibility and ‘the personal touch’ in favour of a more rigid and impersonal approach. Staff trained on these systems are usually unable to step beyond the constraints imposed upon them, leading to brittle and often heated interactions with customers or service users who might be frustrated by the obstinate refusal of ‘the system’ to adapt to their needs.
This way lies madness. We cannot - and should not - expect humans to confine themselves to the constraints of the ‘perfect user’. The systems and services we build using modern technology should, instead, acknowledge that every user demands a unique approach - and that in the confrontation between man and machine, it is the machine which should adapt to our needs, rather than us contorting ourselves into the shape most easily digestible by the system.
Embracing Complexity
I believe there’s cause for hope here. Much of the frustration and inflexibility we find in the design of the modern digital world is, I suspect, the result of attempting to convert complex human interactions into processes navigable through interfaces that are, despite decades of research and improvement, fundamentally incapable of anything beyond a relatively basic set of inputs and outputs. It is simply not possible to convey human experience through a mouse click, nor to appeal to the compassion of a digital application form. The simplicity of the interface has no capacity for the complexity of our thoughts or needs. Though we should be careful not to assume that the solution to problems caused by technology is more technology, it’s worth considering that we currently stand on the precipice of an enormous shift in the way we interact with the digital realm.
In What The Vision Pro is Actually For, I argued that despite some hilariously dystopian overtones, the concept of ‘Spatial Computing’ may open up space for much more intuitive and flexible interactions with modern systems. Imagine a world where systems could respond to the subtle social cues we give each other every day, react to gesticulations and vocal tone, and where your hands are free to interact with a digital system in ways similar to those you might use for any other real-world process. It’s something of a sci-fi trope that the future of humanity is dispassionate, functional and practical - closely mirroring the cold logic of the machine. What if that weren’t the case? Instead - what if the future is one in which you’re free to interact with the digital world with all the emotion and complexity that is our gift as humans?
Developments in AI will undoubtedly help here too - despite the worst-case-scenario fears. It’s already impressive how freely one can talk to an AI system like ChatGPT, and how it interprets ‘ordinary language’ to produce results that make sense. Obviously, the problem of ‘understanding’ remains an enormous barrier - and it’s by no means certain that it’s one we’ll ever overcome. However, we may still see improvements to the status quo even if true comprehension from the machine eludes us.
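As a small illustration of how little ceremony that involves, here’s a minimal Python sketch using OpenAI’s chat API - the model name and the prompt are placeholders, and the client library’s details may change between versions, so treat this as indicative rather than definitive. The point is that the entire ‘interface’ is a sentence of ordinary language:

```python
# A minimal sketch using OpenAI's Python client (v1.x style).
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The 'form' we submit is just free text - no fields, no fixed schema.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "I missed a payment deadline because I was in hospital. "
                   "How should I explain my circumstances to the council?",
    }],
)
print(response.choices[0].message.content)
```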
To illustrate more clearly how I see a potential future - take a look at The Open Interpreter Project. Open Interpreter (OI) is software which allows you to ask something of your machine in plain language and, using the power of LLMs, watch as it springs to life, designing, building, and executing the right software to achieve the task at hand. Arthur C. Clarke once famously wrote in Profiles of the Future: An Inquiry into the Limits of the Possible that
Any sufficiently advanced technology is indistinguishable from magic.
The Open Interpreter Project - and systems like it - are, I believe, the closest thing to magic that we currently have. The idea that one day soon we will be able to speak to a machine the way we would to any other human, ask it to perform some task the way we might ask a colleague or a friend, and have it design and implement bespoke software to do exactly what we asked of it, in the way we specified - is astonishing. Clearly, we are a long way from that reality right now, but the fact that we are seeing the assistant of the future taking its first steps today is incredible to me.
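For a flavour of what that looks like in practice, here’s a minimal sketch based on Open Interpreter’s documented Python usage at the time of writing (pip install open-interpreter). The API may well have changed since, and the request itself is invented for illustration:

```python
# A minimal sketch of driving Open Interpreter from Python.
# Assumes the package's documented top-level API at the time of writing.
import interpreter

# A plain-English request: the underlying LLM decides what code is needed,
# writes it, and (after asking for confirmation) runs it on your machine.
interpreter.chat("Find every photo on my desktop taken in 2023 "
                 "and resize them to fit a 1080p screen")
```

The request is the specification; the software that fulfils it is designed on the spot.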
A Human Revolution
The answer to the Tyranny of Simplicity is not, in my view, to reject the digitisation of systems and services, but to acknowledge that there is no such thing as the perfect user - and to build those systems with this fact in mind. We should strive to embrace complexity, diversity and ambiguity - to build systems which adapt to the user rather than imposing a rigid, simplified set of requirements and constraints. User interfaces should respond to the way the user thinks and moves - rather than assuming that there is one perfect design which everybody can navigate freely. Accessibility considerations should be applied seamlessly, rather than being tucked away in some hidden sub-menu.
Though we’re quite a way off from such a world right now, I’m excited to see how our interactions with modern technology, services and systems will evolve over the next few years. Certainly, I see no obvious reason why navigating modern technology should become more rigid and prescriptive than it currently is - and though I’m not one of those breathless techno-evangelists who believes technology is always the answer, I do feel that the next few years will be an exciting opportunity for us to design and build a much more natural, expressive, and human-oriented digital world than we’ve so far been able to achieve.
Though it’s clearly unrealistic to imagine that we may end up with systems and interfaces that can be all things to all people, I do believe intelligent agents will play a huge role in the next phase of our technological evolution. Since we’re unable to design systems that suit everybody’s needs, we should instead build agents and assistants that can interpret our requests and instructions, and figure out how to get the result we want. Rather than building one app through which we expect users to input and manipulate information to achieve their goals - why not allow users to employ an artificial assistant capable of building the right software for them, at the right time? There is a paradigm shift on the horizon, I feel - and it’s my hope that the coming years will show that technology which embraces the complexity of human life is entirely possible.
Thanks for reading. In Part 2 of Shattered Silicon, I’ll discuss the problem of Broadcast Culture - where people have eschewed action for reaction.
If you’ve enjoyed this post, please consider subscribing. If you can afford it, a paid subscription is greatly appreciated - but if not, you’re more than welcome to stick around - I hope you find something you enjoy!
If you’re still here - why not check out some of my other, semi-related posts?