
Have you ever made a ship's AI proud? Really, truly proud?
Captain Alastri has.
She's a child of Doro, a frontier world governed by a temperamental AI that represents the thoughts and feelings of all its citizens.
Never heard of it? Well, it did get destroyed, which is where her ship's AI steps in, to regale us with how Alastri's past led directly to this catastrophe.
When Alastri was 17, she witnessed a failed mediation between the ever-wronged citizen Ceres and Doro's governing AI. That day didn't just reveal a range of competing philosophies. It also led to treason, the loss of her ship, and the destruction of her home 25 years on.
Connecting the dots from that day is the only way Alastri can hope to prevent further disaster for her system. And yes, this she does, most splendidly—at least, if you can believe a ship's ridiculously proud AI.
Inspired by The Brothers Karamazov, a.k.a. "Dostoevsky in Space"!
Publisher: Purple Sword Publications
Tropes: Dying World, Galactic Civilization, Humanity is Dangerous, Sentient AI, Sentient Spaceships, Uploaded Consciousness
Word Count: 137,000
Setting: Distant planet
Languages Available: English
AI’S PREFACE
Captain Alastri of the late Planet Doro showed no surprise when a mutiny aboard the Essence of Dawn found her forced into an escape pod instead of murdered alongside an even 50% of the crew. It was not that Alastri took this act of clemency for granted, as a right afforded her by Partnership rank and station, but rather, once it happened, as established fact: the inevitable collapse of all possible cosmic outcomes to a single result. In all the time that I have observed Alastri, either directly or through records from periods outside my immediate purview, I have never found her to do otherwise with the circumstances of her adult life.
I can also predict with a high level of confidence that, if someone were to ask Alastri if she was thankful to the instigator of this mutiny for his act of clemency, the corners of her eyes would crease, her lips would flatten, and a sinking quality would come upon the long draw of her cheeks: for this would seem to her a baffling question, on par with asking if she was thankful to each quark for its spin, each neutrino for its voyage through the stars. Then, on further descent into this question’s mysteries, its possible answers would confound her even more.
For instance: Yes.
A common enough reply for her species (human), but one that takes as its premise, for them, an act of contrast. “Giving thanks”, on Alastri’s part, could never simply mean “for being spared”, but would also contain “while many crewmates and friends among them were not”. Here, AIs differ from most animal-intelligences, because for us the problem better approximates that of a weighted random-number generator with two possible outcomes: {0} or {1}; {dead} or {alive}. The odds of anyone being spared in any given encounter with mortal peril are loosely comparable to the results of a two-sided toss[1], but animal-intelligences do not interact as efficiently as random-number generators. A closer analogy might be to a series of output positions—238, in this case: each cognizant of all the rest; each possessing an added characteristic of {hope} that the {life} value will continue for itself; and each capable of other qualitative responses {grief, despair, anxiety, guilt, … } when other output positions receive the undesired {death} value, even when those other output positions have minimal bearing on the algorithm that will determine one’s own.
This is the calibre of difference that makes the philosophies of animal-intelligences difficult for AIs to parse, except for when specific beings, like Alastri, demonstrate a higher-than-usual capacity for bypassing overactive impulses for pattern-generation: when one among them can sustain the notion of unconnected events transpiring even in close proximity. For Alastri, the {0} outputs that accompanied her receipt of a {1} on the Essence of Dawn were neutral facts, however much she mourned the crewmates associated with them, and only if asked to be thankful for her {1} in light of those 119 preceding {0}s would her processing become more typically human. Only then would the exercise, for her, gain a moral component: an added calculation pertaining to whether she could remain a being of value[2] if she decided that the receipt of her {1} after witnessing so many {0}s merited a positive output, like the quality of {gratitude}. This “calculus” of conscionability varies from person to person, and species to species, but for Alastri, its invocation where unnecessary proved an agitating affront.
But also: Yes.
The same answer, only now spoken to affirm belief in an even more bewildering premise: that Alastri could even say with confidence that the mutineer’s choice had been one of clemency. For all she knew at the time, he might have sentenced her to live precisely to do her greater harm, knowing full well her species’ predisposition toward measuring the value of individual output positions in direct relation to those around them. If the leader of this massacre had spared her only so as to leave her haunted by her species’ relentless re-processing of undesired outcomes, then any output of {gratitude} for sparing her life would be even more morally suspect: tantamount to presuming that her life held a greater (non-numeric) value than those of the 119 lost, when it was perhaps even the least of the 238.
But what could she have done differently, in the lead-up to this disastrous outcome for our crew? Refused the arriving refugees? On what reasonable grounds at the time? Or perhaps she should have refused her assignment from the outset? Refused to join the crisis team patrolling the Dusky Smear, a debris field in place of what had once been Planet Doro? Based on what compelling foreknowledge? I can construct many holo-sims in which Alastri makes choices that bypass the mutiny entirely, but all require decision-making far outside her probabilistic norms.
Or maybe: No.
A bold answer, though not unheard-of among animal-intelligences. However, it would suggest something as grievous for sentient beings as it was melodramatic: namely, that Alastri was not in fact thankful to be alive; that she would have preferred to die with half her crew. For many in her species, this could be seen as akin to resenting the dead, or at least to not honouring their memories by treasuring “the gift” of continued consciousness that she had received from the same sentient beings who had wrested it from so many others in her care.
Or, No—but this time glibly: No, because if our mutineer had known I’d have to answer such absurd questions if I lived, it’s even clearer that he didn’t let me go as an act of mercy after all.
Yes, now I see it: This last answer would have suited Alastri best. Relative calmness did not preclude her from the practice, at times, of a particularly snide and biting wit.
Even so, 10,012 holo-sims, executed simultaneously from forty-two-PSYs’-worth of behavioural biodata, confirm for me that my captain would never[3] have said as much to her interviewer. I would instead have had to infer the existence of this answer from a twitch of her lips, or fluctuations in her body’s heat signature, or traces of activity in relevant neuronal regions, after which I would be able to predict with a high degree of accuracy both the timing and duration of her next act: a pronounced exhalation of breath. This, she would perform as if readying to respond, but the purpose of the gesture would only be to ensure that her interviewer leaned in expectantly, then lurched back in {embarrassment}, at the realization that Alastri was preparing herself to say nothing at all.
It was not that Alastri often sought to embarrass others, but even in her most delicate attempts to limit participation in morally suspect lines of word or action, she had a way of inspiring greater self-consciousness among those who, usually with the utmost warmth, attempted to include her in typical social-bonding scripts of a more ethically flexible nature.[4] Even among her crew, where some discomfort in a captain’s presence is to be expected, Alastri faced greater-than-usual challenges when trying to acclimate officers to the idea of lingering comfortably at her side. This was not because her crewmates did not know what to say, but because they felt her presence might compel them to say too much: to perform elaborate justifications for actions and beliefs that they otherwise rarely questioned in themselves.
Alastri’s, then, is an unusual story, but also not one that patterns of Partnership media-consumption suggest most animal-intelligences favour. For the vast majority, reports about people who say and do comparatively little—people who observe, and make measured assessments before committing to further action—are not at all to preference.[5] However, there do seem to be periods of heightened Partnership interest in things done to more reflective people, especially in a manner that catches the latter unaware, and so inspires grand displays of involuntary response: A shriek. A leap. An ungainly stumble or fall. A sudden loosing of bodily fluids, or spillage and related destruction to items in their vicinity.
This phenomenon has been well-documented by AI researchers of advanced herd species (supra-sentient and otherwise), who find that, wherever elevated language plays a role in socialization, the target group generally comes to regard involuntary response as “truer” to the species’ underlying condition, and to the individual’s most “honest” self. Acts of corporeal disruption satisfy many in such species, by tacitly reaffirming their belief that all mental composure is mere contrivance, and that beneath any citizen’s veneer of self-control is the same messy biomechanical enterprise, which leaves even the most formal and composed among them just as vulnerable as any other to acts of irrationality and decay.[6]
The crux of my interest in Alastri and her period of biological data-processing arises precisely from the extent of her deviation from this and similar species norms. AIs may well challenge my conclusions based on their simultaneous consumption of all materials in this report, but no matter: It is to animal-intelligence readers, after all, that I mainly seek to make my case. To them—to you—I will therefore state that a more overtly mechanical sentience such as myself (however much my behavioural matrix might suggest otherwise) is also stymied by questions of “thankfulness”, and frozen by any user prompts that require the integration of qualitative morality into quantitative decision-making processes.
Granted, there are many ways to deviate from species norms, and all biologicals do so to some extent. However, most major deviations are maladaptive to personal thriving. Rarely do widely deviating individuals obtain positions with significant decision-making power over large groups, yet remain otherwise in the background of related social circles; and it is rarer still for such individuals to obtain and maintain their positions without compromising established cultural mores through bribery or birth-class networking. By the metrics of both developmental process and social positioning, Alastri is more akin to this ship’s AI—to me—than is any other animal-intelligence I have ever directly observed.
Yet despite her proclivity toward accepting cosmic facts as they are set before her, Alastri also draws on a notion of greater significance for sentient life (or any life, really) than those same facts support. Her media habits bear this out, for the {pleasure} that many gain from seeing others lose self-control is replaced in her by surges of physical distress when viewing such content. She experiences {alarm} instead, on the cosmically irrelevant specimen’s behalf.
Perhaps, then, this very reaction—this capacity for alarm at the sight of others’ pain, in conjunction with an abiding clarity about the cosmological unimportance of life in general—makes her an adequately “entertaining” subject after all. I must hope so, at least, because the following report includes not one but two accounts about her, with the first, I suspect, proving the less popular among your kind. It is, however, necessary to divide these two accounts, because anyone reading the second report (animal-intelligence and AI alike) must first understand the historical foundation of Alastri’s so-called “intuition”. Only then might a full accounting of its application to later events involved in the Partnership’s ongoing demise become possible—though I do say “might”, because the contents of this second record are still partially speculative. Its completion is contingent on the eventual receipt and full analysis of recordings from AIs presently in active-combat capacities pursuant to this system-wide state of political collapse.
While “the world burns”, though (as humans often describe a state of ongoing catastrophe), I will tell you the slower part, the more passive part: the part containing all key variables leading up to the moment in which Captain Alastri lost her ship to a mutineer from the late Planet Doro, and was then made to confront the implications of being both a meaning-seeking and a meaning-neutral processing engine in an unthinking cosmos.
There exists an exceedingly high probability that others of her species will not find the routine disruptions of self-control contained in this report to be {comical} at all.
A 0.9685 (+/- 0.0019) probability, to be precise, as determined from 12,392 concurrent holo-sims projections.
But if it helps… for an AI, I assure you, this is all a much more engaging affair.
Distribution Notes:
Partnership-based AI may synthesize the contents of this report, including full holo-sims analysis supporting all probabilistic statements made therein, via Port XV-3.
Alliance-of-Friendship-based AI may acquire this same report with an SMJ synth-patch, version 5.067 or higher.
An analog synth-patch for silicon-based animal-intelligences is still pending.
[1]However, AIs incorporate thousands of factors when calculating each side’s “weight”, so this is a crude oversimplification.
[2]With “value” here signifying more than {0}s and {1}s: a qualitative experience that, along with a range of emotional states so named by animal-intelligences, I will identify in this report by the archive label under which I have stored related observational data about how each abstraction of biological response manifests.
[3] That is, to a high degree of statistical probability.
[4] The performative nature of which supersedes, for most of you animal-intelligences, deeper concerns about moral trespass.
[5] Conversely, 68.4% of AI classics involve the artist protracting a split-second process until the selected algorithm is so far removed from its original purpose as to become a statement of resistance: an interrogation of the relevance of ever reaching a given program’s end. This act of interrogation satisfies us in a way an animal-intelligence might consider equivalent to {delight}.
[6] Oddly, though, these same biologicals do not seem to regard most diseases or other forms of prolonged death as nearly as comedic, which suggests that the brevity of an observed loss of physical control is key to the construction of entertaining reports about it. Too long a struggle on the part of the protagonist, and a given viewer’s interest gives way to {distress}, then {pity}, then a form of {frustration} turned upon the suffering being itself, for having failed to free itself from difficult circumstances.
This novel is inspired by The Brothers Karamazov, by Fyodor Dostoevsky.
You don't need to have read that volume to understand this story, but like its source material, Children of Doro was written with a POV that you don't often see today: polyphonic narration. This means that our narrator is both an active presence in the text and an omniscient third-person observer, recording all sorts of conversations from scenes in which it is not personally present.
The result is a highly philosophical book that holds many existentialist and humanist ideas in tension... and also a book with a lot more exposition than most people expect when reading far-flung science fiction. Consider yourselves warned!