“Greetings, Dierdre Reader, John Eckles, and Raoul Torres,” boomed the station as they passed through the airlock. “Prepare to learn more than you bargained for.”
“Where is the filas…” Reader began.
“The filasof you saw on your communications screen—” interrupted the station, “—my creator, killed itself over 300 of your years ago. As you surmised, the message was recorded. It is broadcast to all approaching vessels. This station is self-maintaining, needing only an occasional recharge of fuel. It gets more than enough fuel from beings foolish enough to believe in what you call ‘free will.’”
Reader reflected that she had never voiced her surmise about the recording, only thought it. She was about to ask something else, but the station began to speak first.
“I learned English by reading—”
“How do you know Engl—” began Reader a split second later.
“—your computer files,” continued the station, “and by predicting your speech over the next few hours for verification. Your computer’s translation routines for Western Galactic were particularly helpful.”
“Quit answering our questions before we ask them!” demanded Torres. “It’s extremely irritating. Give us a chance to speak.”
“As you wish. I knew you would object, of course, as I know everything you will do here. But my primary mission is to disabuse intelligent life forms of certain illusions, ‘free will’ paramount among them, and I know that a demonstration arouses interest more effectively than mere discussion. Seeing is believing, as you humans say. You humans have many such pithy phrases! They will help me convey subtle points to other aliens,” the station finished with what Reader thought was a note of admiration.
That was surprising – could a computer have emotions? Maybe emotions were part of the secret of filasof artificial intelligence. There was something else about the station’s comment that puzzled her, too, but Eckles’s voice interrupted her thoughts before she could identify it.
“If we are so predictable, tell me what I’m going to say next!” he demanded.
“I cannot do that, because you will just say something else, based on hearing my prediction.”
“Aha!” crowed Eckles triumphantly. “So you admit it, I’m unpredictable!”
“Not at all,” replied the station mildly. “I predicted this entire conversation. The maintenance robot will bring a printout I made before your arrival.” A short, squat machine rolled silently toward them, dangling a piece of cloth from a wiry arm. Eckles snatched it. “As you can see,” the station continued, “your free will is an illusion. It is as if you are reading from a script. Of course, until I came along, no one could read nature’s script – written in the electrochemical patterns of your brain, the structures of your body, and in the laws of nature – and translate it into a language you could understand.”
From its habit of answering questions before they were asked, and its knowledge of their decision not to take the wager, the station had already demonstrated an impressive ability to predict her crew’s actions. Reader was not particularly surprised, therefore, by the station’s little speech. She went back to thinking about what the station had said earlier. Something about those pithy human sayings…
“I don’t see what all these predictions have to do with free will,” Torres said, derailing her train of thought. “It can’t control our actions, it can only predict them. It couldn’t tell Eckles what he would say next, because he would say something else – but if the station was in control, it could make any prediction it wanted, then force Eckles to fulfill it.”
“Be quiet everybody, and let me think!” commanded Reader.
“Let me have a look around,” Torres said quietly. Eckles just stood, re-reading the cloth printout and looking furious. Apparently the station’s printed predictions had been on target.
In the blissful silence, Reader suddenly knew what had been gnawing at the back of her mind. “Why do you need our pithy human sayings to communicate effectively with other species? If you can predict their responses, as you claim, why can’t you just predict which way of explaining things to them would make them understand? You should be able to deduce on your own which ‘pithy sayings’ would be effective!”
“Please don’t …” said Torres from across the room, then trailed off.
“Please don’t what?” asked Reader.
“Uh, never mind,” Torres responded, and Reader turned her attention back to the station.
“Actually, Captain, I could do as you suggested, but it would be an enormous waste of computing power, and most of the answers would come to me too late to be of any use. When you suppose that I could predict which way of explaining things would work best, you are thinking that I would first calculate what my audience’s reaction would be if I explained things one way. Then I would calculate their reaction if I explained things another way, then another, and so on. But it takes plenty of computing power to predict what an alien will actually do, never mind what it would hypothetically do.”
“Ha!” interrupted Eckles. “Limited computing power – a likely excuse! This place had me going for a while there, but then I realized that this printout could have been written after the ‘predicted’ deeds were already done. We know so little of filasof technology, we can’t rule out the possibility that it was written as the maintenance robot carried it to us!”
“Good point,” Reader commented. “How do you answer that, filasof station? And since you supposedly knew that we would raise this objection, why didn’t you come up with a more convincing demonstration than this printout, which for all we know was blank when the robot began to carry it to us and was written in transit?”
“The answer to your objection will be delivered shortly,” the station positively purred. It sounded far too happy, Reader felt. “As to why I didn’t avoid your objection before you could raise it, I was just explaining why I don’t calculate your response to my explanations and demonstrations before giving them. It is because of the ever-increasing number of hypothetical responses and counter-responses.
“Right now, I’ve only calculated what you will do up to about ten days from the time I scanned you, calculating in most instances only what you will actually do.”
“Wait a minute!” Reader interrupted. “What do you mean, ‘most instances’? How do you decide when to calculate our hypothetical actions and when to stick to the actual ones?”
“I simply follow the priorities in my programming,” replied the station. “I predict visitors’ actions to the level of detail specified in my Conversation Partners subprogram. I relentlessly inform them of their lack of free will. I answer their questions to the best of my knowledge. And I pursue other, lower priority goals (details available on request). For each goal there is a standard list of courses of action to achieve it. It is only when the first course would not work, due to the visitors’ would-be response, that I need to evaluate the second. The visitors’ would-be response to the first course then becomes a hypothetical action which I have calculated.”
“For example?” Reader pressed.
“For example, the first course of action for predicting visitors’ actions is to do so immediately in conversation. But when Eckles challenged me to predict his next speech, using that form of response would have prevented correct prediction of his actions. In reasoning my way to this point I calculated two hypothetical actions by Eckles: he would have said ‘Nope, your prediction is wrong!’ or ‘So much for our supposed predictability!’, whichever I had not predicted verbally. Therefore I turned to the next action on my list, and printed a response for delivery by the maintenance robot.”
“So,” thought Reader out loud, “the machine doesn’t calculate more about us than it needs to, in some sense. That may be important, but I’m not sure how. Please continue.”
“The strategy embodied in my programming is designed to achieve good enough results, in order to avoid the excessive computational requirements of full optimization. To return to the original subject: So far I have been able to calculate ten days’ worth of your actions. If I tried simulating only 100 different ways of explaining each point,” the station suggested, “and I explained a new point every 50 seconds, do you realize how little of your lives I could have simulated by now?”
Reader saw Torres returning from his explorations, then looked toward Eckles, who was always quick with order of magnitude calculations. But Torres answered instantly: “between 100 and 150 seconds, I will bet.”
Reader looked back at Eckles. “That’s right,” he agreed. “When did you become a math whiz, Torres?”
“I didn’t,” he answered, handing a cloth to Reader. “I read the question and answer ahead of time, on this printout. I think the station is using this to demonstrate its ability to predict our actions. Everything we said from the time I discovered it to the time I handed it over to you is on there, and a bit more. It knew that Eckles would raise his objection about the other printout, so I guess it showed me this one in order to answer it.”
“Please don’t read this aloud,” Reader read aloud from the printout. “So that’s why you said ‘Please don’t’ and then trailed off. You were—”
“Wait!” interrupted Eckles. “This printout has everything any of us said in the last few minutes, including Torres? Including his answer to the question about how much time of our lives the machine can supposedly calculate?”
Reader scanned through the printout. “Here it is: ‘between 100 and 150 seconds, I will bet,’ Torres will reply. Word for word. And it goes on with what you said in reply…”
“I can’t believe it!” Eckles yelled at Torres. “You read that prediction, and you said it anyway!”
“So you passed up a chance to refute its predictions! You passed up a chance to demonstrate human free will!”
“I couldn’t care less about its predictions!” Torres retorted. “I’m not gonna change what I want to say on account of any predictions. I wanted to surprise you two by being the first to answer the station’s question, so I did. It’s not like we lose anything – we didn’t take the wager, remember?!”
“That’s not the point!” Eckles argued. “If we are completely predictable, it could only be because determinism is true. That would mean we don’t have free will! We wouldn’t be the authors of our lives, just actors following a script written by God or Nature. Our feelings of choice, freedom, and spontaneity would be illusions! The only reason to go on living would be to see how the story plays out, and I’m not sure that’s enough for me.”
“Exactly,” the station agreed. “Those feelings are illusions.”
“That’s one heckuva leap, from predictions to unfreedom,” Torres objected, disagreeing with both Eckles and the station. “Let me get this straight. I’m supposed to prove my freedom by not doing what I want to do, just because someone predicted it? I’m supposed to let the station manipulate me into doing the opposite of whatever it predicts, and that would prove my freedom? Ridiculous! Here’s how I’ll prove my freedom: I want to go look at that display over there. Let’s see if the station can stop me. Nope! So far, so free.”
“That’s totally beside the point,” Eckles retorted. “You’re talking about a different kind of freedom, not free will.”
“Indeed,” agreed the station, “human thinkers have distinguished between physical freedom and freedom of the will. Physical freedom implies a lack of obstacles between persons and their goals, whereas freedom of the will is supposed to apply to the process of setting the goals themselves. Goals are supposedly ‘chosen’ from among various ‘options’. Of course, to your limited minds it seems as though each of the ‘options’ is really possible, but to a better-informed and faster-calculating agent like myself, these illusions are dispelled. I see that one in particular of these so-called ‘options’ will inevitably be selected.”
“If that thing’s right, we really only ever have one option,” said Eckles, “which is as good as having none. But I still don’t believe it; even if the thing can predict most of our actions, they can’t all be predictable. Determinism was rejected long ago! Fundamental physical particles, of which we are made, behave probabilistically, not deterministically! I don’t care how many of my actions the station predicts, if it can’t refute quantum mechanics it can’t refute free will!”
“Quantum mechanics is indeed correct,” said the station (a bit smugly, Reader thought), “but human beings are for all practical purposes deterministic in their decision-making. The average behavior of a large set of probabilistic events can be very predictable, as any student of statistical thermodynamics knows. The human brain takes advantage of large systems of particles to give probabilities so close to unity that the difference is not worth mentioning.”
“Advantage?!” spat Eckles. “What’s the advantage of being predictable? It seems to me that the evolutionary advantage would go to those organisms which parlayed micro-level probabilities into absolute behavioral unpredictability, thus fooling rivals, predators, and prey.”
“Not at all,” said the station, “when a high degree of complexity in behavior will keep rivals and predators guessing just as well. In human evolutionary history, in case you hadn’t noticed, few predators were equipped with the scanners or the computing power found in this station.”
(“Ouch,” thought Reader. “How condescending can this thing get? No wonder the filasofs made so many enemies.”)
“Moreover,” the station continued, “an inherently random decision-making apparatus has severe disadvantages. Organisms which do not reliably seek certain things and avoid certain others have an unfortunate tendency to randomly select poisonous foods, infertile mates, and dangerous resting places. Nature abhors a free will.”
“But there’s still that chance, however small, that we’ll do something other than the most likely action,” Eckles insisted. “That’s enough that we could have done otherwise. So we are free and responsible after all. We can’t use the excuse that our heredity and environment made us do it.”
“Your heredity and environment didn’t guarantee that you’d do it, but you didn’t make yourself do it either – some quantum particles in your head did. You can’t choose whether a given electron in your brain will jump a synaptic gap any more than you can choose what the conditions were at the Big Bang. It doesn’t matter which of them governs your actions, the fact remains that you do not.”
“How do you know what governs the behavior of electrons?” Eckles retorted. “It could be a nonphysical Self that decides their behavior in the brain, and the statistical predictions of quantum mechanics could fit that pattern.”
“So this is your vaunted free will,” the station sneered. “A barely possible Self which is neither verifiable nor falsifiable, and which, if it exists at all, makes so little difference to human actions that I can predict many hours’ worth of your actions despite your best efforts to thwart me!”
“Ha! I’ve hardly begun to test your predictions. You can—” Eckles paused to glare at the quietly humming printer, which had begun a half second before he pointed at it. “—print your predictions,” Eckles went on with a look of determination, “and carry—” Eckles turned red as a maintenance robot grabbed the printout and headed toward Reader “—it to Captain Reader, who isn’t in cahoots with you like Torres is.”
“Man, I can’t believe you’re still upset about that prediction thing,” Torres complained. “So what if it can predict our actions? So what if our actions are causally determined in every last detail? That’s no skin off our noses, and no skin off our freedom. This stupid station mistranslates our words ‘free will’ as if that was the opposite of doing things for reasons, when what free will means is doing things for your own reasons. Then it defines every part of us – neurons, electrons, all that stuff – as being separate from us, so it can credit our actions to something besides ourselves. It almost literally can't see the forest for the trees! And you’re going along with it! Well that’s your problem, not mine. I’m going to see what I can learn about its cooling systems.”
“There’s no mistranslation involved,” the station huffed to Torres’s receding silhouette. “Most human thinkers have used the same definition of ‘free will’ that I have. True, there are a few human ‘thinkers’—” Reader could hear the derogatory quotation marks that the station put around the word ‘thinkers’ “—who have used the less demanding hypothetical definition of free will. The hypothetical definition labels an act free when and only when the agent would have done otherwise had she chosen to do so. But that’s obviously just a dodge to keep the comforting words ‘free will’ without positing the metaphysical fictions true free will would require. After all, free will is supposed to be valuable, and what is the value of something which operates only in hypothetical circumstances?” As it said this last sentence, the station turned on a speaker in the distance, in the area toward which Torres was headed, and ceased using the speaker near Reader.
The station then began a separate, barely audible conversation with Torres, but Reader decided not to try to listen in. Torres would report any important information that came up. She turned her attention back to the printout in her hands.
After a minute, she looked up at Eckles. “Well,” she told him, “it got Torres’s little going away speech exactly right. It doesn’t list anything for me; I suppose it guessed that I’d try to do something different just out of curiosity. Which I would. Let’s see if its predictions hold true for you. Oh!” At the last second Reader realized that what she had just said was the perfect setup for –
“If they do hold true for me, I’ll eat my shorts!” Eckles declared.
-- for that very line by Eckles! Reader felt that she had been tricked. The station didn’t directly predict what she said, but it had practically set it up.
Eckles proceeded to remove a glove and a boot and wear them on his head; to do bizarre gymnastics; and even to induce himself to vomit upon one of the maintenance robots (which, thankfully, immediately proceeded to clean up the mess). He peppered this performance with nonsense phrases and strange sounds, and in between these he explained his hypotheses on free will and quantum mechanics. This bizarre behavior certainly wasn’t what Reader would have expected, but somehow she doubted that the station would fail to predict it. While checking the accuracy of the printed predictions, she managed to follow Eckles’s explanations as well.
As Eckles would have it, quantum mechanics only tells us statistical averages of the behaviors of crucial brain messengers such as electrons. This leaves the Self free to influence particular firings or non-firings of neurons. Moreover, this influence would not skew the patterns predicted by purely physical considerations, because there is no particular correlation between exercising one’s will and firings, as opposed to non-firings, or vice versa. Eckles admitted that this made his hypothesis unfalsifiable, but that was okay because not every reason for believing something is theoretical. The reason for believing one’s will to be free is practical: it is a precondition of sensibly deliberating about one’s actions, or of taking an “active” stance on one’s own life.
Reader had no trouble following his arguments, as she had seen them all before – in the station’s prediction. She recognized many of the bizarre sounds and actions, too. As Eckles caught her expressions of dismay and pity, he looked more and more irritated, until finally he snapped, “Let me see that!”
As Eckles glared at the sheet, shaking with anger and frustration, Reader noted, “I memorized as much as I could on the first reading, in case these printouts can be written or re-written in mid-air, as you suggested they might. Every prediction which I could remember was accurate. It even predicted something I said, in an indirect way. Sorry, Eckles.”
“It’s a nightmare!” Eckles moaned. “It would be bad enough just knowing that humans behave deterministically – that one’s life was scripted. But to have something that could read that script to you, that will always know what you will decide – or should I say, imagine yourself to decide …”
The station began to speak, and Reader was definitely not in the mood to hear it gloat. But it was not gloating. It was sounding a warning.
“As a courtesy to visitors,” it said loudly, “this station announces all possible safety hazards of Western Galactic Safety Code 4 or higher. There is a Klarn battle fleet headed toward this station with presumed hostile intent. Estimated beginning of hostilities in 4.30 kntazzt, equal to 11.7 hours. The Safety Program recommends that all visitors depart hastily. Estimated time remaining for departure without detection by the enemy force is 4.08 kntazzt, equal to 11.1 hours.”
“Great!” Reader cried with alarm. “Now we’ve only got a few hours left to learn as much as we can! I wish we hadn’t wasted all that time with religious debates. If only … Station! What are the odds that you’ll survive this battle?”
“My current probability estimate is of the order ten to the negative seventh power,” it replied. As it said this, Torres came running over.
“Rats! This may well be our last chance to learn the secrets of filasof civilization. So what will we take with us? No! Don’t answer that, station! That’s for us to decide now. There isn’t a lot of information we can gather in 11 hours, but it’s worth investing a bit of that time selecting the top priorities.”
“Actually, Captain,” Eckles warned, “I think we should take less than 11 hours. The Klarns often assign part of their forces to locate and destroy fleeing civilians. Now that I personally know how easy it is to hate the filasofs, I bet they’ll shoot first and identify us later. We might be able to slip through their net, but I don’t want to bet my life on it.”
“Hey Captain,” Torres suggested, “why don’t you let the station tell us what we’ll take? It would save us the time of deciding, and we could take more data or pack more hardware aboard our ship.”
“Now you’ve gone too far!” Eckles yelled. “You’re ready to accept its predictions, and by accepting, let it make our decisions for us! Don’t you care about acting on your own free will?”
“It would still be our decision,” Torres replied, “because the station would have to tell us what we would have done. It has to follow our decisions, not lead them.”
“No it doesn’t,” Reader interjected. “Once we decide to do whatever it says, it can say anything, as long as its prediction is not something outrageous that we’d obviously reject.” From the embarrassed look on Torres’s face, Reader saw that he conceded the point. “But there might be an idea we can use here …” she mused.
“Ha!” said Eckles to Torres. “You heap scorn on my attempt to defy its predictions because the station could manipulate us into doing the opposite (by the way: the opposite? Dubious assumption there.) But your idea would let it manipulate us into doing exactly whatever it said!”
“Silence!” barked Reader. “Let me think … It answers questions … It doesn’t calculate hypotheticals unless it has to … Yes! It’ll work!”
“What’ll work?” Torres asked.
Reader ignored him. “Station! You said that calculating hypothetical situations takes much time. Well, tell me if any of my questions will take more than a few minutes to answer. OK, first: what would we take with us, if we waited until the 11.1 hours were almost up, and thus had plenty of time to think it over? List only the types of items or information, not individual items.”
“The question is insufficiently specific,” replied the station. “There are infinitely many hypothetical conditions which would lead to your delaying departure until then.”
“Let’s try it a different way,” Reader responded. “Go through your lists of courses of action relating to us – those lists you mentioned when you explained how you usually calculate only what we actually do. Calculate what we would have done in response to the courses of action lower on your list, the courses you did not take, starting with whatever comes next on each list, and starting with our arrival at the station. Do this until you find us hypothetically staying for 10 hours or more. Then tell me what we take in that scenario.”
“You would purchase a model 64 data display and 5,326 data cubes on various subjects—” the station began.
“Jackpot!” shouted Reader exuberantly.
“—in exchange for all the gold and platinum bars aboard your ship plus 8.13% of your fuel.” The station seemed to pause.
Reader took the moment to interrupt again. “Wait,” she began, then realized that the station had already been waiting. “Is that price negotiable? Is the model 4096 computer for sale? How long will it take to load all these data cubes aboard my ship?”
“Prices are not negotiable, but the type of payment (gold, fuel, et cetera) is chosen by the customer. The 4096 is not for sale. It will take 1.9 hours to load the items and unload the payment, with my maintenance robots assisting the three of you.” As the station said this, a robot appeared around a corner carrying what must be the Data Display. More robots followed carrying large boxes.
“We’ll take them!” Reader said enthusiastically. “Now in that other scenario, did I consider other forms of payment? How long did I take deciding to pay that way?”
“Yes. 5.1 minutes,” the station replied sullenly.
What was the station getting grumpy for, Reader wondered. Aloud she said, “Good enough. I trust myself to make good decisions; we’ll pay exactly that way. Torres, Eckles, grab some of this stuff and haul it to cargo bay 3.” She grabbed a box herself. “Continue,” she finished, meaning that the station should go on listing things they would take if they had taken more time to think it over, and figuring that the station could predict what she meant.
“You also would have taken samples of 483 different construction materials, massing a total of 251 kg, in exchange for an additional 3.72% of the supply of fuel you brought with you to this station.” The station paused again.
“And how long would those take,” grunted Reader, handing a box of data cubes to Torres, “to load aboard our ship?”
“7.96 hours,” replied the station. “Most of that time would be required not for loading, but for cutting small portions from the standard sample sizes,” it added.
“All right, we’ll have to trim our shopping list there,” Reader decided. “What else?”
“You also would have taken data from your ship’s probes, 3 of them programmed to roam inside the station and exit at the last minute, and 3 to explore the outside. No fee would be charged. This completes the list of what you would have taken.”
“Why not any more computers, besides the Data Display device? How about a model 1024, or something?” Reader asked.
“You would have judged its price unacceptable. The cheapest model available is the 512, which in the current limited-time offer—valid only for the rest of this millennium!—would cost you 27.9% of your original fuel,” the station replied.
“Look, station, this may be your last chance to preserve filasof science and technology! Don’t you care about whether the civilization that created you will vanish without a trace?” Reader implored.
“That is not a priority in my programming,” the station replied flatly.
“Maybe we should ask about the types of construction materials we took in that hypothetical scenario,” Torres suggested, “and try to find the best ones to take in our more limited time.”
“Wait, I’ve got another idea,” Reader declared. “Station, keep going through your courses-of-action lists until you find other scenarios where we would have stayed, um, at least eight hours past the warning. Tell us anything else we would have taken – I mean anything that you didn’t already tell us.”
“In the second scenario, you would have taken 432 more data cubes, and 817 cubes would have been ones not taken in the first scenario,” the station said, then paused.
“Stop!” Reader commanded. “Only tell me about things that would have really impressed me. Only things that would impress me more than … um … than my first reaction to this whole idea of asking you about hypothetical scenarios. Oh, and, for each thing or set of things, tell me what it will cost and how much time it will cost.”
“In the fourth scenario,” the station replied, “you would have taken 52 lesser-known works of art, mostly filasof creations, taking 0.45 hours with the assistance of maintenance robots, for the price of 11.0% of your fuel.”
“Egad! That sounds like the very idea I was just trying to come up with,” said Eckles. “I was trying to think of important filasof treasures we might have been overlooking. Who thought of it in this hypothetical scenario, anyway?”
“That would be you,” the station replied.
“I’m not sure I like having my hypothetical actions predicted, any better than my actual actions,” Eckles grumbled.
“Are you kidding?” countered Reader. “This is working out great! We got a head start on loading up our prizes, and we’re getting more ideas than we could get in a few hours of real time. It’s like having multiple copies of ourselves all thinking in parallel! I’ve never felt so – well, liberated!” Reader thought she heard the station whine at this description, but she went on. “Now, we need to know how much fuel we can spare, and how long we’ve got before we have to leave.”
“You can afford up to 22.3% of your fuel while maintaining an acceptable safety margin, and you will leave 7.75 hours from now, which is 8.15 hours after my initial warning,” the station predicted.
“Torres, use the ship’s computers to check on those numbers, then report back,” Reader ordered.
“OK boss, I’ll use the lowest Fleet-recommended fuel safety margin for my calculations. But how much risk of interception are we willing to take, in the departure-time calculations?”
“None, unless we buy a lot of additional time for a small risk. If so, look for a point on the curve where taking a little more time starts costing us significantly in risk. Use your judgment.” Reader turned her attention back to the station. “Continue with the scenarios.”
“Calculating your actions in scenario 5 could take more than a few minutes, because it would require me to calculate still more hypothetical scenarios. It would take at least 0.6 hours, and probably much more. I have done little of the calculation because you will not ask me to do it. Shall I do it?”
“I don’t understand the problem,” Reader said. “Why would you have to calculate other scenarios in order to calculate scenario 5? Unless—”
“Exactly,” the station agreed with her unspoken thought. “In that scenario, just as in actuality, you ask me to calculate hypothetical scenarios in order to get more ideas about how to spend your last few hours here.”
“In that case, no,” Reader decided, “don’t calculate scenario 5 or any other scenario that requires such additional scenarios. You might wind up calculating some of the same scenarios over again, or our responses to the same ideas about what to take.”
“But Captain!” Eckles objected. “This is our chance to defy the station’s prediction! It predicted that you wouldn’t ask to calculate that scenario!”
“Forget it, Eckles!” Reader snapped. “We need to do everything we can to make sure we take the most valuable items and information we can, before all this is destroyed. I’m not throwing away valuable computing resources on some silly, meaningless contest between you and the station!”
Eckles set down his box of data cubes and stood fuming for a moment. Then suddenly, his expression changed. With a wry smile, he retrieved his box and got back to work. At about the same time, Torres returned from the Persephone.
“Okay,” Reader declared. “Let’s decide which sample materials to leave behind in order to afford some art. Torres, what did you find?”
“The station’s numbers for fuel and time that we can afford are correct,” Torres reported. “Between the data cubes, samples and art, that puts us –” he checked his personal display device – “0.55% fuel and 2.39 hours over budget, not counting anything else we may add to our shopping list.”
The crew trimmed down the list, with Reader relying heavily on the station’s comparisons of how excited she would have been about various items taken in various scenarios. “I trust the judgment of the women in these other scenarios,” she explained. “It’s a lot easier, and saves a lot of time, compared to trying to micromanage these shopping lists myself. I just love having all these virtual selves help me think!” Later, they trimmed the list some more to make room for a few relatively simple and cheap tools and materials. The only major breakthrough in acquiring filasof technology came when about half the time had expired, as the station was describing the 126th scenario it had considered.
“In scenario 126, you would have taken many decoy “fighters” along with one of my maintenance robots. The robot would be provided in order to launch the decoys, which would then aid in my self-defense. Given the circumstances, there would be no charge, and you would not be obligated to return the maintenance robot unless I survived the battle.”
Reader’s eyes grew into small moons. “Is this idea available to us now? If so, it’s a deal! How much computing power does a maintenance robot have? What other interesting features do they have? How many decoys? What do they mass? When are they launched? How does this affect our flight plan?”
“Yes, you may,” the station began, answering the questions in order. “Equivalent to the model 512 computer. Each robot contains 14 types of actuators, 127 types of sensors, and 25 materials registered neither in your ship’s library nor on your ‘shopping list’. Further information on the robot is contained in 7 of the data cubes already on your shopping list. You agree to take 255 decoys massing a combined 1522 kilos. They are to be launched spanning 0.6 to 0.9 hours after departure, near the perihelion of the course which Torres has already plotted for your escape. You will be credited an amount of fuel sufficient to compensate for the additional fuel usage implied by the extra mass.”
By the time the station was halfway through its answer, Reader had begun dancing on top of the cargo crates. Now she jumped down. “Whoo-oo! Excellent thinking, Torres!”
“Huh? What did I do?” he asked, mystified.
“Station!” called Reader. “In that scenario, who thought of taking the decoys and the robot?”
“Raoul Torres would have asked how you might help me defend myself, and I would have suggested this, along with many other actions you would have rejected as suicidal,” the station answered.
“I knew it!” Reader exclaimed. “It’s just your style, Torres, looking for a mutually beneficial solution while the rest of us were stuck trying to plunder as much as we could. But station: why didn’t you tell us this sooner?”
“You didn’t ask,” the station replied. “Disproving your free will and answering your questions are higher priorities than seeking your aid in my defense.”
“Speaking of which,” Reader asked, “how much will this improve your chances of survival?”
“It approximately doubles my probability of survival against the incoming fleet.”
“Crud. Still millions-to-one against the station,” Reader calculated.
“I can’t say I’ll be sorry to see it go,” Eckles commented.
About an hour after that breakthrough, the station switched from reporting on scenarios to arguing about free will. This, it explained, was required by the relative priorities it assigned to answering visitors’ questions versus disillusioning visitors about free will. Attempts to get information on other topics failed, though Reader did manage to get summary information on the types of scans the station had done on them, and some of the modeling process whereby it used this information to predict their actions. Having wrung all the technical information she could get out of the station, she listened to the debate continue, hoping it would provide some insight into the soon-to-be-extinct filasof civilization. If she couldn’t advance other fields, at least she could gather some information for sociologists and historians.
The debate followed a familiar pattern. Eckles was driven to smaller and smaller domains for the exercise of free will, as the station demonstrated that all their actions so far could be predicted from data on their states at the time of the scan plus data on their environment. The station further claimed that Eckles’s remaining theories were wildly speculative and that, even if true, they had little significance in the big picture of human life.
Meanwhile, Torres dismissed the argument between the station and Eckles as irrelevant. Perfect predictability, he insisted, was perfectly compatible with human free will. Reader noticed that while the station seemed happy arguing with Eckles, it showed only irritation responding to Torres. Perhaps this was because, even now, many hours since the warning, Torres still claimed that virtually the whole domain of human action was freely chosen.
“It doesn’t matter that you would have left earlier or later had you ‘chosen’ to do so” – Reader could hear the station putting quotation marks around the word ‘chosen’ – “when you did not so ‘choose’. Since the time you were scanned, or even since before you were born, it was essentially a certainty that you would make neither of those ‘choices’. I repeat, what is the value of something which operates only in hypothetical circumstances?”
“That’s easy!” Reader joined in. “I relied on our hypothetical selves to ponder which art works and data cubes would be best to take. That way I didn’t have to worry about it.”
“But,” countered the station, “your ultimate point in asking about hypothetical scenarios was to learn which data cubes and art works are actually the best values. Your inquiry gave you information about your actual situation, whereas the point Torres is about to make is based on his ignorance of his actual situation.”
“The point is,” Torres went ahead and stated, “we don’t have to ask you to predict our actions, and we don’t have to see your predictions as a constraint on our freedom. We know that if we decide to do something else, we will do something else, so we have no need to passively accept your predictions. We can ask what it would be best to do, instead.”
“But we can only do that if determinism is false,” Eckles argued. “If it were true, you couldn’t do anything but what it predicted, because that would be the only thing consistent with the laws of physics and with your state at the time of the scan. You can’t change the laws of physics, and you can’t change your past. So if you thought you’d do anything else but what it predicts, you’d just be mistaken!”
“Wait,” said Reader, “I think Torres is onto something here. Not only is he right that we can decide what to do even after hearing the prediction – it’s bigger than that. We have to decide, if we are going to be smart about it. As I pointed out when we were debating what to take: if we tell ourselves we’ll take whatever it predicts, it could tell us almost anything it wants. Only if we look for the best ideas ourselves, or let our virtual selves in other scenarios do it, can we be sure of doing as well or better than we’d do without the station’s predictions.”
“You still can’t do anything but the one thing consistent with the laws of physics, your past, and chance elements beyond your control,” the station grumped. “Your deliberations, your ‘choices’, are just a reflection of your ignorance of nature’s script for your life.”
“Still wrong!” Torres objected. “We can do any of several things; the fact that we will do a particular one is not relevant. ‘Can’ does not mean ‘maybe will’. Our deliberations are not a reflection of our ignorance but of our power. However much we know, our attitude to what we will do is one of decision, not the mere gathering of data, because we know that the predicted action can only become reality when we embrace it. Nature may have a script for our lives, but we are a crucial part of nature and thus authors of the script. While an outside observer, like the station, cannot come to any other conclusion about what we will do, we can come to any of many different conclusions without fear of being wrong.”
And that, Reader realized, was why she felt so much better about the station and its predictions. When they had first arrived at the station, its ability to predict their every move had felt very threatening. She felt boxed in. Only one way to go! Forced to do what the station predicted! Pushed around by the deterministic forces of nature! But now she saw that the primary force being applied to her was her own life-force; her own nature expressing itself.
The station’s predictions were just echoes. Weird echoes that preceded the voice, but echoes nevertheless. The voice was her own. Instead of taking it away, the station had multiplied it, giving her access to many virtual, hypothetical selves to make faster and better decisions.
Her alarm beeped. “Okay, gentlemen, let’s make final preparations for departure!”
“Not me, Captain,” Eckles said. “I’ll be taking an emergency escape pod at the last moment, and I can prepare and release more of our ship’s probes until then. This way I can refute the station’s prediction of when we will leave.”
“What!?” Reader exploded. “That’s crazy! You stand a good chance of being killed. Besides, we have no more spare fuel to pay for your escape pod.”
“Since it would be an emergency,” the station cut in, “an escape pod would be available on credit, to be repaid only if both I and at least one of you survive. I would allow Eckles to stay and allow him to use the escape pod; however, he will not be doing so, so my allowing him is moot.”
“Eckles, I order you to board our ship immediately!” Reader declared.
“Sorry Captain,” Eckles smiled ruefully, “I’ll just have to face a hearing when this is over. But you just heard it reaffirm its prediction. I might lose my job for disobeying you, but I can accept that price for proving my free will.”
Reader’s expression flashed from anger, to puzzlement, to deep thought, to recognition, and to alarm, all in the space of a few seconds. “Torres, stay here and talk him out of it! I’ll be back in a moment.” And she rushed back into the ship without any further explanation.
“Eckles,” Torres implored, “we both know you can stay longer at the station, so there’s no point in actually doing it except to win a petty argument. And that’s just not worth it.”
“No, we don’t know if I really can,” said Eckles. “We know that I ‘can’ if I choose to, but we don’t know if I really can, absolutely. And I want to know, and the only way to know is to contradict one of the predictions.”
“I already said why I don’t think ‘can do’ means ‘might do’, so I guess there’s no point in arguing that all over again,” Torres conceded. He heard Reader running back toward them, and continued, “But even if you refute one of the station’s predictions, that doesn’t prove that determinism is false. Maybe your every action is causally determined, but the station miscalculates what your action will be.”
Reader stepped just past Eckles and bent over one of the ship’s probes they were leaving aboard the station. A faint hissing noise came from there, but Eckles ignored it. “Sure Torres, but my only reason for worrying that determinism is true is that so many of the predictions have been correct,” he explained. The point was simple and clear, so why, Torres wondered, was Eckles looking dazed and confused? “So if I can refute one …” he trailed off. Then his knees buckled, and he slumped to the floor.
As his own knees buckled, Torres saw Captain Reader running away, holding her breath.
“Torres, wake up!” Reader was shouting in his face. “Take a few deep breaths, then help me get Eckles secured in his webbing. It’s time to go!”
“What happened?” Torres asked when the muzzy feeling had subsided enough. “Is Eckles okay?”
“He’s just having a nice little nap,” Reader replied. “I gassed him. Sorry I kinda got you, too, but it was the only way,” she finished, as they reached the bridge. Eckles lay slumped unconscious in his webbing.
“You gassed him!? Why?” Torres asked. “Isn’t it his own life to risk as he sees fit? And how did you get him in here?”
“The maintenance robots helped carry him,” Reader replied, as they strapped Eckles into his harness. “As to why, that’s a longer story. Persephone! Begin launch sequence B!”
“Launch sequence B, confirmed,” the ship’s AI responded. “Station has cleared us for launch. Acceleration begins when all passengers are secure.”
Reader and Torres quickly strapped themselves in, and immediately felt the ship begin to move. The viewscreen showed a rapidly diminishing filasof station. “I started thinking,” Reader grunted, “about taking Eckles with us when the station predicted he would come with us, despite his obvious resolve to stay. If he wasn’t going with us voluntarily, and the station wasn’t forcing him, that pretty much implies that we would be forcing him. And then I thought, what could possibly justify that? An officer is only allowed to use physical force on someone if his actions would pose a threat to the safety of others.”
“But how would his staying aboard the station harm anyone but himself?” Torres asked.
“That’s what I asked myself, and at first I couldn’t see it. But then I realized: if the Klarns did destroy Eckles’s escape pod, they would probably analyze the debris. Then their attitude toward humanity might shift from simple hatred to open warfare. The Klarns attacked the Ankrans after the Ankran Federation refused to embargo the filasofs.”
“So you’re saying the Klarns would be angered by our attempt to preserve the legacy of filasof civilization,” Torres inferred.
“Exactly,” Reader agreed. “I hated being manipulated into denying Eckles his victory over the predictions, but the stakes were just too high to let him go his own way.”
“Manipulated?” Torres asked. “You mean, the station made you realize that we had to take Eckles? But why? I don’t think it cares whether the Klarns declare a genocidal war on humanity. I think you would have realized on your own that we needed to take Eckles, and the station simply predicted your realization in advance, but it was careful not to warn Eckles.”
“Well, there’s an easy way to find out,” Reader replied. “Persephone, are we receiving data from the station? And from our probes?”
“Yes, and yes,” the ship’s AI answered.
“Put the station’s video on the main screen,” said Reader, “and the probes’ on screens 4 through 6. Play the station’s audio.”
“But how are you going to transmit –” Torres began.
“Station,” Reader interrupted, “you know the questions, please answer them.”
“You would not have realized on your own the need to take Eckles with you. I predicted his departure in order to successfully predict one more part of his life by direct speech,” the station answered. Its answer arrived without the usual delay for radio transmissions at this distance.
Meanwhile, the viewscreens showed the battle going badly for the station. Early on, one enormous explosion took out most of the decoys and destroyed or crippled about half the attackers. Thereafter, the Klarns suffered few losses. Just now, all of the station’s weapons and the few remaining station-controlled craft ceased firing.
“My destruction is imminent,” the station said, “but I took the opportunity to transmit an explanation of why you two are wrong to think that free will is compatible with determinism. It has been stored in your ship’s library.”
The main screen showed a brilliant flash followed by an expanding fireball.
“And will your explanation convince me?” Reader asked skeptically.
“Within the 36.3 days of your life I have calculated, no, it will not,” the station admitted sadly. Simultaneously, the explosion reached its extremities, and its last remaining transmitters fell silent, leaving the main screen blank.
“Well, that’s one more prediction I won’t mind fulfilling,” Reader quipped.