About Salaì
What Is Salaì?
Salaì is reclaiming technology for human beings. A project of projects, an exercise in rediscovering the joy of creation and, I hope, of connecting with one another, Salaì offers a challenge and an invitation: to imagine, and actually create, software that serves human needs first and foremost. Salaì is a little bit of me. You can make it a little bit of you, too, if you want.
Salaì is striving to do something truly new, so it shouldn’t be defined first by what it is not (and I am sorry for the double negative). Even so, I would draw the reader’s attention to the fact that Salaì is, counter to certain trends, named after neither a villain nor an evil device from science fiction or fantasy. Nor will any of Salaì’s projects ever be so named. That should tell you at least a little about what we’re going for here.
Is Salaì a business? A community? A bridge to something better than (or at least a small refuge from) what we have right now, a world seemingly inundated with duplicity and rage? I offer this first attempt to make a tool and a space where we might start to answer those questions for ourselves, together.
Whatever Salaì is and whatever it might become, I hope it helps. Will you help me make Salaì into something that helps?
Read on to learn more about where Salaì comes from or click over to the Projects page to see what Salaì can look like in practice. If you like what you see, there’s no need to ask your doctor if Salaì is right for you. Head over to the Connect page and join the Discord. Hope to see you there.
A Time of Little Revolutions.
I am not technically what you would call an “elder millennial,” but I am a millennial who is, somewhat against my will, becoming “elder.” I grew up on the computer and with computers. The 90s and early 2000s were an incredible, unique time to be enthusiastic about consumer technology. Every year seemed to reveal another quantum leap in capability. What had been scarcely imaginable just a few months prior would suddenly be not only real but accessible to ordinary people in their own homes (at least in middle-class homes, like the one I grew up in).
Milestones of my childhood, still palpable in memory, include watching an older cousin perform wizardry in MS-DOS, coaxing esoteric information, processes, or even games from the machine by means then incomprehensible to me, or dismantle a PC found on the winter roadside and bring it back to life on my parents’ dining room table. Such experiences imbued this technology with a sense of wonder, even awe, at its potential power while demonstrating that it was nonetheless knowable, controllable. I felt powerfully the potential that resided in these machines, which continued to proliferate, year after year, ever more deeply into practically every area of life, seemingly without limit. That potential waited, beckoning to whoever learned the necessary language and commands, invoked like so many arcane spells to bend the machines, and by extension the world increasingly dominated by them, to one’s will.
The possibilities seemed endless, and that was no mere childish flight of imagination. Revolutions were manifesting all the time. In the world of video games, a favorite pastime of my generation, I loved my NES, but I envied my friends who had a Super Nintendo or Sega Genesis. Then the Sony PlayStation appeared and swept them all away by elevating video games, literally, into another dimension. Just a few years later, the PlayStation 2 heralded what arguably remains, to this day, the largest single-generation leap in graphical fidelity ever seen. Such was the raw power of this new game console that Japan’s Ministry of International Trade and Industry limited exports for fear that certain Middle Eastern and North African regimes might use it to build advanced missiles. This was a video game console, and I diligently made sure to pre-order mine and pick it up on launch day in the US. The feared PS2-powered missile guidance systems never materialized, but the system did turn out to be another kind of secret weapon when it went on sale for about half the price of the cheapest DVD players then on the market, marking the true beginning of the end for another technology I had taken for granted in my childhood: VHS.
The world of PCs was arguably moving even faster than that of game consoles. People sometimes joked that the latest PCs were obsolete by the time you brought them home, and there was some truth to it. In the 90s and the years immediately following, the ever-increasing power of home computers was constantly smashing through limits, broadening the horizons of what was possible with these systems, and the catalog of available applications rapidly expanded to take advantage. What was once only available to large corporations was now at the fingertips of any small business or home user. Whole new paradigms of user interaction were unlocked as graphical user interfaces, made possible by powerful new CPUs, replaced the old command lines and terminals. And, of course, the internet exploded onto the scene throughout the mid-to-late 90s, flying high on lofty promises of a more connected, informed, convenient, and democratic world.
Computers certainly continue to grow more powerful today, but one could be forgiven for not noticing from one year to the next, as all the most important software appears to have long since been enabled if not actually written. Thirty years ago, though, the whole world was changing right in your living room or home office, and everybody knew it.
Everybody knew it, and it was exciting. It was fun. I don’t know if it is possible to convey to those who may not have lived through it the general, buoyant atmosphere of optimism that accompanied these rapid innovations. Maybe it was part of America’s long, triumphal sigh of relief following the end of the Cold War. Maybe it was the sense that we had somehow collectively arrived in what Francis Fukuyama (in)famously declared “the end of history,” a sense of fundamental social and economic finality which, even if it could feel stifling in some ways, made the pace of change and the tenor of conversation around it feel somehow benign if not decidedly positive. Against this background of an almost subconscious faith in a deeper, unshakeable stability, transgression and rebellion, even cynicism (or, at least, certain public performances of these), became mainstream. Somewhere along the way, Silicon Valley developed its now oft-quoted ethos, “move fast and break things,” and why not? When you feel you can trust that nothing vital can be threatened, much less lost, there is a sense of permission and safety in breaking down the old forms. In the absence of any serious accounting for negative externalities, change for the sake of change becomes a virtue in and of itself.
This account may seem critical, even as it bears the hallmarks of my own nostalgia, but a central feature of this environment, where the system itself felt invincible, was that things tended to be articulated in the language of benefiting human beings. Having perfected (we believed) our economic and political forms, we assumed that technological, economic, and social progress and disruption (always deeply intertwined) must redound to the benefit of all. PCs and the internet would unlock new modes of expression and connection for an ever-growing population of global users. They would allow you to make your artistic or business dreams come true. We would all become happier, healthier, more productive, better informed, endlessly entertained in all kinds of novel ways, and increasingly unburdened from mundane tasks, irritations, and inconveniences.
These expectations may have been naive, to say the least, but they were certainly humane. They centered on helping ordinary people and improving our collective lives. The world which has emerged since that time has somehow lost that sense of humanity.
Gods That Fail.
In the years that have followed that heady cultural and technological whirlwind, a cloud seems to have descended upon the collective mind.
Perhaps beginning with the sobering shock of 9/11, the world began to seem dangerous again. As the soaring promises of information technology and especially the internet gradually came down to earth, a simultaneous process of digital enclosure swallowed up what had been the new digital commons.
Gone were the strange, earnest, often hideous but ever personal and unique landscapes of Geocities and MySpace. In their place emerged a homogenized internet, signified by the cookie-cutter Facebook profile page (a stark contrast to the Wild West that was themeable MySpace pages). We may have bought into this new paradigm because it promised, at first, exclusivity and a more streamlined way to interact and share with each other. It would not become apparent until much later that the comparatively austere and confining presentation reflected a shift in ethos which would come to define a new era of what we would eventually come to call “Big Tech.” This ethos, which has now achieved almost universal dominance, revolved around something of which we were entirely innocent. Innocent because it was, at that time, only just being invented: our data.
This ethos is more or less monomaniacally concerned with our data: harvesting it, selling it, trading in it, controlling it, analyzing it, packaging and repackaging it, and, now, sifting it into enormous neural networks to be sold back to us under the trade name “AI.” In contrast to the tech ethos it subsumed, it is, at best, indifferent to our expression, personalities, or needs. At worst, it ruthlessly exploits these to keep us plugged into this machine which long ago ceased to serve us.
Since the launch of ChatGPT and the AI arms race that has followed, the tech culture which understands users first and foremost as resources to be exploited, rather than customers (much less people) to be served, has taken a new and still darker turn. Far from the expansive rhetoric of years past, which may have spoken of unlocking new human potential and curing diseases, today’s CEOs and press releases are more prone to speak in grim, even apocalyptic terms when discussing the expected (and hoped-for) impact of the latest leap in technological progress. They openly talk of destroying the livelihoods of countless millions (indeed, more than a trillion dollars’ worth of investment now probably hinges on whether they can deliver on that particular, horrifying promise within the next few years). They publicly air their fears, apparently sincere, that their own creations may soon spring to life in the form of so-called Artificial Super Intelligence and, quite literally, kill us all. Though that seems a remote prospect, it is nonetheless concerning that many industry leaders not only seem to believe it, but some appear ambivalent about whether it would even be a bad thing.
Some of this might be dismissed as “marketing hype.” Some is almost certainly just delusion. But, whatever the case, it is clear that merely criticizing this state of affairs is not enough. The supposed gods of the tech industry, who were meant to be stewards of our shared and shining future, have turned out to be remarkably short-sighted, self-absorbed, and irresponsible, yet they wield far vaster resources and influence than perhaps any other human beings in history. To sit by and merely complain is to be complicit. We must accept that this is just who these people are. Moreover, this is just how the systems and businesses they have built operate, and they are not about to change them just because we don’t like it. If we want something better, then we must take responsibility for creating it ourselves and get to work.
Why AI?
In spite of (and probably in no small part because of) being shoved into absolutely everything these days, what is currently sold as “AI” is not widely popular. The fact that something has been created, in whole or even in part, using AI can be enough to provoke deep feelings of distrust, disgust, or even outrage. The word “slop” has come into vogue to describe the soulless digital pollution that generally gets churned out by these machines.
Some dismiss this pushback as a mere Luddite sensibility, a reflexive, narrow-minded resistance to change or new technology. I do not find this excuse the least bit convincing, precisely because so many of the people who now find themselves repelled by AI services and their outputs were, like me, raised in an era of rapid, sustained technological change with far greater impact on our lives than anything AI has yet wrought. In general, we embraced it, even delighted in it. Our frequently enthusiastic participation created the internet as a social phenomenon and made the success of these tech companies possible. Now, suddenly and out of nowhere, we are supposed to have become stupid and stubborn opponents of technological progress? No. Such narratives may soothe the egos of AI boosters and enthusiasts, wounded by the invective flung at their work, hobbies, or, oh boy, “relationships,” but these reactions are not for nothing.
It should be little wonder that people have turned sour on AI when, as explained above, the tech industry at large has undertaken a decidedly dark vibe shift, deepening and making more explicit an attitude of exploitation toward users, hostility toward workers of all kinds, and indifference toward real people and their needs, even verging on outright anti-humanism.
But none of this is the fault of the technology.
What is currently promoted and sold as “AI,” primarily Large Language Models (LLMs) and other offshoots of Deep Learning neural networks, is a vast, sophisticated mimic of human expression. It is created by using an enormous corpus of human-created work to imprint extremely sophisticated statistical models with a representation of the patterns spanning the dataset. A bit more specifically, the models are “trained” to predict, given a string of text, the statistically most likely next word (or, more precisely, the next token, a word or word fragment). Additional work is then done in hopes of making the outputs more useful (such as recognizing and following the pattern of a back-and-forth “chat”), and the result is the surprisingly fluent, if not necessarily truthful or coherent, “generative AI” outputs many of us are now familiar with. In essence, the current generation of “AI” is a novel, interactive representation of our collective artistic, cultural, scientific, and other expressions. It can be prompted to reflect (though not altogether reliably) all kinds of social norms, work practices, superstitions, and more. Taken in isolation, such a technology would appear to be, at worst, a benign curiosity if not an unmitigated good, a new mode for exploring and expanding our shared humanity.
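To make the “predict the next word” idea concrete, here is a minimal toy sketch in Python (the three-sentence corpus and the function name are invented for illustration). It simply counts which words follow which, then generates text by repeatedly sampling a statistically likely continuation. A real LLM replaces the counting with billions of learned parameters over subword tokens, but the basic loop of predict, append, repeat is the same:

```python
# A toy "next word" predictor: tally word-to-word transitions in a tiny
# corpus, then generate text by sampling likely continuations.
# (Corpus and names are invented for illustration; a real LLM learns
# billions of parameters over subword tokens instead of counting words.)
import random
from collections import Counter, defaultdict

corpus = (
    "the machine serves the person . "
    "the person builds the machine . "
    "the machine learns the pattern ."
).split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a continuation in proportion to how often it was observed."""
    options = follows[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Generation": start with a prompt and repeatedly append the prediction.
text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

Everything such a system “knows” is a compressed echo of its training data, which is why the quality, provenance, and ownership of that data matter so much to what follows.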
It is mostly a matter of historical contingency that makes it so repellent. In the first place, it is built and maintained by tech empires with a flagrant, verging on ostentatious, disregard for the intellectual property and privacy rights of ordinary people, despite those empires themselves being established on intellectual property of their own (which they, quite naturally, jealously guard and go to great lengths to keep private). It is then wielded by a remote class of billionaires and C-suite executives to drive new profit centers: first by polluting existing software with a deluge of junk AI features for which no one has asked and which rarely work anyway (sometimes breaking existing workflows in the process), and second by aggressively moving to destroy the livelihoods of millions, including those same people whose work was taken to create the models in the first place.
But the technology is now out there in the wild for all to use or abuse as they wish. There is nothing that says we can’t play, experiment, build, explore, and create with it in humane ways just because these companies do not even seem to be aware of the problem. We may yet hold out hope that they will be legally obliged to adopt more ethical practices with respect to the rights of those whose data they have appropriated, but, if not, that is even more reason to take whatever ownership we can of this machine built on the backs of our collective work and lives, without our consent, and turned against us.
That is what Salaì is striving to do. Rather than cede the AI landscape to those determined to make it an ideal tool of our deeper exploitation and marginalization in social and economic life, we would make AI tools that serve people in the spirit of a previous era of techno-optimism but without the illusions that made that period secretly dangerous and laid the foundations for where we are now.
Why “Salaì”?
According to Wikipedia, in 1490, at the age of ten, Gian Giacomo Caprotti da Oreno, later known as Salaì, joined Leonardo da Vinci’s household as an assistant and pupil.
The details of Salaì’s life aren’t important except this: he was the assistant to the master.
We believe that AI and all technology should be used to enable and unlock human creative genius, not attempt to somehow replace it (a doomed enterprise if ever there was one). While many current AI tools seek to “de-skill” human beings or simulate their unique artistic styles, performances, or even likenesses, we reject these trends wholeheartedly. Likewise, some AI features and tools offer to make machine-generated alterations that fail to respect the human creator’s voice or authority; these, too, we reject.
Technology ought to expand the scope of human abilities, not obstruct them, much less seek to remove any semblance of craft.