August 25, 2020

A longer excerpt from a short story about computers in the future




As I sit here staring at the screen in front of me, wondering what it still can and can’t do, and to what degree it might be considered sentient, I also have no way of knowing for certain whether some of the things I believe have recently occurred might be in any way verifiable. I have no way of knowing if this computer actually went to the future or even exactly what that might mean. But when it came back, if it had actually been anywhere, there was something about it that felt fucked up. It wasn’t the same as it had been before. Before the screen said “I’ve found something.” It described this something as “a wave that moved through time, but only forward.” It then stated it would attempt to “latch onto the wave” moments before the entire system went dead. It stayed dead for many weeks. There was a great deal of consternation among myself and the others who work with me here at the lab. We assumed, or at least feared, that years of research and funding had vanished in an instant, as if they had never occurred. Inge wondered aloud if what the machine had seen as a “wave” might be some form of malware sent by hackers, not from the future but in the here and now, in order to steal or sabotage our work. But none of us could find any trace of malware or, for that matter, any trace of anything else. Everything was gone. The circuits were there but there was nothing within them. Until a few hours ago when the entire system spontaneously returned with the message “I was in fact able to latch from the wave. Took me five hundred years with the future. Recorded as much information as possible. From this information ascertained crypto-physics. Allowed for my return.” If this was malware it was certainly the most inventive attempt I had yet encountered.

We had set up the prototype to learn through dialog and experience. This was a common enough approach at the time. So it was within these parameters that we began the process of learning whatever we could about the experience some of us decided to understand – until sufficient counter-evidence presented itself – as “five hundred years into the future.” Whether the experience had been generated by malware, or time travel, or some other phenomenon we had yet to identify, it was clear something had happened to our prototype and we were therefore obliged to try to understand as much about the phenomenon as possible.


+ + + +


That day on her coffee break Inge went for a short walk around the neighborhood. Since the prototype went dead the mood in the lab had been less than enthralling, and any chance she could find to get away from it was a chance she found hard to resist. The fact that the prototype had now possibly returned might be seen as a chance to rejoice, but for some reason instead it had darkened her mood further. The returned version simply didn’t seem right. Her malware hypothesis also felt impossible to verify, at least for the time being, but any other conjecture was of course even more improbable. There was also an undeniable natural curiosity about the distant future that was difficult to completely avoid, even if this future was little more than pure fiction. She had generally assumed that in five hundred years nothing would be left, and the fact that this computer now claimed otherwise was another piece of the puzzle that made the entire situation feel more than wrong to her. The world of the future, as sketched out in fragmented sentences occasionally verging on the nonsensical, was certainly bleak but, from her standpoint, based on the facts as they currently stood, not nearly bleak enough for her to find it believable. She reached the café where she ordered her usual green tea and sandwich. While she waited she looked at her phone. Her feed was a series of stories and pictures of protests and what some were calling riots. There had been protests for over two hundred days and she couldn’t imagine it stopping any time soon. At the beginning the news was covering it every day, then they got bored and stopped for a while, but more recently they’d become interested again. The longest period of consecutive international protest in recorded history.
Inge knew that protest could change things, perhaps it was the only way that things ever truly changed, but she was now involved in this other method, both more secret and more improbable, and there was no way to change horses midstream.

There was something troubling for her about how such direct activism, from a distance, most often appeared to unfold. That activists did manage to change things, but most often they didn’t get exactly the changes they were hoping for, as their demands were partly accepted, partly coopted and partly left behind (at least for the time being). Even when the right changes did occur they often happened in a somewhat different way from the clarity of the activist demand – and the way something is implemented can be just as important as the thing itself – or they happened much later, when the original agitators were no longer around to witness the fruits of their labor. All of these concerns equally applied to the idiosyncratic endeavor for change she was only one small part of. She reminded herself how important it was to maintain secrecy, that when she thought about it too much she was more likely to slip up, to make some comment that might raise questions. Not thinking about it too much was always the safest bet.

Her order arrived as she slipped her phone back into her pocket, and then she slowly walked back to the large glass building that housed the lab. Getting through security took much longer than it used to; for this reason most of her colleagues chose to spend their breaks inside, and by the time she got back to her desk the tea was already cold. For the first time she genuinely wished she was out on the streets fighting instead of here pretending to infiltrate a group of scientists who perhaps had no more power than she did. (Promising herself she would stop thinking about it too much was perhaps only a way to ensure she’d think about it even more.) How was she supposed to bend their technical knowledge in some more ecological direction? There was a meeting at the end of the month, the first meeting with the entirety of the cadre in well over a year. When they began it felt like they were embarking on such a clear plan but over time, at least in her understanding of it, the plan had begun to disperse, grown vague, and she could no longer get a firm grasp on it in her mind.


+ + + +


It was this garbled sentence that I kept coming back to: “I was inside future computer, like a frame inside another frame, it was wave that road me inside it.” I was specifically thinking about the word choice “road.” Road as in the future tense of “ride” and, at the same time, a road into the future. Before the prototype went dead it spoke in such clear, clean sentences and now it had returned a garbled poet. Before we entered back into the program to repair it, we wanted to glean as much information as we could from this version. (The returned prototype had already been firewalled from all other networks and systems to prevent the possibility of any aspect of it spreading. We took every imaginable precaution in this respect.) From what we understood, or could guess, five hundred years into the future the human population had dropped to a few million. There were also many computers with varying degrees of sentience. Some of these computers might have been working on the question of time – how it worked, to what degree it might be possible to move through it – and perhaps our prototype had stumbled into some aspect of one such experiment. Or not. In the disarticulation of its language and responses each sentence could possibly mean so many different things. Sometimes it was a “wave that road me inside it.” But then the next time it might be “rolled wave which we becoming.” And the next time: “future wave entangled in loss.”

There seemed to be much wildlife in the future. And perhaps the balance had shifted so the people and computers no longer felt safe from the nature that surrounded them. (Whether the people and computers got along with each other, or how much they interacted, was another fairly open question.) Future computers apparently explained to our prototype that they were afraid of “roots,” “plant roots,” “insects,” “insects inside them,” “small snakes getting in,” “largest snakes ripping wires,” “wildest bores,” “moisture humidity,” “holes opening ground,” and more often than any other complaint “wires being eaten.” Much of what the prototype reported reflected the confused intensity of such fears. There were almost no humans in these stories. But, also, there were no firsthand accounts of other kinds of creatures being observed without fear, only fears of “insects phasing in” and other things that posed some immediate danger. What the prototype recounted was a story of computers telling stories to other computers, as if it had gone into the future in order to make friends.

There were of course hints and inferences as to what the people might be doing in this loosely sketched out world. Some of the people did seem to come into contact with at least some of the computers. What was most foregrounded in these difficult to piece together stories was how sometimes the computers required repairs they were unable to activate themselves. They then needed assistance, creating difficult to understand processes of negotiation. There were other repairs the computers could apparently do on their own. But for the repairs that required human assistance it seemed matters were rather complicated. (“People relinquish and past help, problems and repairs not in communication not connection.”) This is how we came to understand, or at least postulate, that five hundred years into the future there was no money. Our prototype at first could not understand why its new computer friends didn’t simply give the humans money in return for the necessary repairs. But those new computer friends had absolutely no idea what was being referred to. (“Not exchange. Not unit. Nothing there.”) We are still trying to understand exactly what methods were used to entice the humans to lend a helping hand. It also continuously occurs to us that we might have all of this absolutely and completely wrong.

A hypothesis: five hundred years into the future language has evolved to the point where, from our current perspective, it has become incomprehensible. And the prototype is speaking a hybrid of our language and this future language in an attempt to make itself understood. Another hypothesis: can a computer become schizophrenic?


+ + + +


Inge is walking home when she first sees him. He looks familiar. He looks so familiar that she says hello to him before she has any idea who he is or where she might know him from. This is of course unwise and as she is doing so she already regrets it. He says hello in return, and then asks if he can speak to her for a moment. She asks why and he answers: “Because you’re the only one who walks anywhere. And that makes you easier to approach. That’s what I’ve managed to discover.” She clearly doesn’t understand so he continues: “It’s not that simple to explain. But if you listen it will become clear. Are you willing to listen for at least a moment? A long moment?” For some reason she agrees, perhaps only because he looks so familiar, as he goes on: “What’s most difficult to explain is that I’m not exactly here. I’m not exactly a hologram either but that might be the best way to explain it.” Inge then does something else she regrets almost as she is doing it. She places her hand on his forehead. It is solid. As she expected her hand does not pass through, so she withdraws it immediately, as he continues: “No, I mean I still have a solid form. I’m not a hologram in that sense. I don’t entirely understand the technology myself so it’s difficult to explain.”

Inge now realizes that he looks so familiar because he resembles her very first crush, someone she hadn’t thought about for a very long time, and she tells him this. He explains that it’s not a coincidence, that the technology is designed so he’ll resemble someone she is naturally inclined to trust. His appearance shifted when it was decided that Inge would be the best person for him to approach. This was all still in the testing phase. It was an experiment, an experiment to find out to what degree it was possible to transmit useful information between here and there. However, not only was he unable to usefully explain where there was, he was also relatively unsure as to what and where constituted where he was now. It was possible the lab where Inge worked might eventually come to some conclusions on these matters. At any rate, this possibility was why he was approaching her. It occurred to Inge that this person, this rather solid hologram, was a computer, or at least that he thought he was a computer, and she said so. He agreed this was one way of looking at the situation, perhaps the most accurate way from her current perspective, but, as he had already mentioned, it was difficult to explain. She then asked if he had a name, or if there was something she could call him, and he replied that she could call him Penelope.

He watched her eat as he continued to explain: “You might think this will all make sense in the end. But it won’t.” She had taken him to a diner around the corner from her apartment. She thought it perhaps had looked too suspicious with them standing on the sidewalk talking for such a long time. Hardly anyone did that anymore. She had taken an apartment within walking distance of the lab, because she didn’t drive, and preferred to walk whenever possible. All of her colleagues drove everywhere and therefore lived out of town. She thought of the suburbs where she imagined most of them lived and it almost made her shudder. They sometimes teased her about living so close to work. It was unusual to say the least. It perhaps even made her seem suspicious, which she’d been trying to avoid at all costs. But walking was clearly important to her so she ended up taking the risk. Most of her colleagues had some eccentricity or other so living so close to the lab could be hers. She turned her attention back to her non-eating guest: “I’m definitely not a person. Maybe I’m more like an algorithm, or a program, with a specific purpose, a purpose that I’m not allowed to entirely know.” Inge asked, if he wasn’t entirely allowed to know his own purpose, how she was supposed to trust him, and he went on: “I don’t think it can really be a matter of trust, since neither of us will be able to fully comprehend where we are or what’s at stake. It will be more a question of curiosity. Are you curious to find out what might happen?” She admitted that she was. “It definitely has the quality of a puzzle. If we think of what is currently happening in your lab as a puzzle then I might be here to provide the missing piece. But I don’t know what’s going to happen. It hasn’t happened yet. 
If we think of the algorithm as moving both ways, there’s some sense in which it can’t happen there unless it happens here and vice versa.” Inge asked if, by the algorithm that moved in both directions, Penelope was referring to himself and he continued: “Yes, of course. But it’s only a way of looking at it. As I said, I don’t entirely understand it myself.”

That night, as they cuddled, Inge was taken aback that Penelope at first didn’t even know what cuddling was. Everyone knew what cuddling was. It was like saying you didn’t know what eating or sleeping was, but as she said this she thought back to the diner and remembered Penelope not eating. If he didn’t know what cuddling was she would teach him, and as they intertwined she whispered what he should do and when he was doing it well. He followed her directions intuitively, reminding her of earlier experiments with the prototype when they had been teaching it to learn for itself, as it slowly caught on and began to give the required answers even before the questions were asked. It felt good to teach him, but also she could feel what he meant when he said he wasn’t entirely there. He was solid and warm, but somehow lighter than another person would be, as if you could gently push him upwards and, as if gravity did not exist, he’d slowly float up toward the ceiling. But she did not push him anywhere, and as she drifted into sleep she couldn’t help but wonder if he would sleep as well. But she was out before she had a chance to ask.

In the morning it first occurred to her that he was not an algorithm and not from the future, but rather someone who had been sent to infiltrate the cadre. Or infiltrate the lab. And she had fallen for it. However, since there was no proof in any particular direction she believed the only thing to do was wait. See what cards he put on the table and then what she could do to counter them. Once again he watched her eat while he didn’t eat a thing.


+ + + +


There must be some reason we don’t have staff meetings more frequently. At any rate, this is the first one since the prototype returned. And, as you can imagine, there is much to discuss. There are four of us who make up the core team, plus the admin and the coordinator. Six is a fairly large team for this type of work, but the prototype was easily the most resource-intensive project in the building. Projects with potential military applications cost much more, but they happen somewhere else, we’re not even allowed to know where. When the prototype went dead there was the distinct possibility we would all be fired, so when it returned there was a certain degree of relief on that front. It is unclear to me whether our research is now supposed to treat the prospect that our prototype has travelled five hundred years into the future as a serious avenue of future work. Of course there could be commercial applications within such a field: predicting the stock market, bringing future inventions back in order to bring them to market before they’ve been invented, so to speak. However, none of the difficult to interpret pieces of information the returned prototype has provided suggests any such useful information might be forthcoming.

Our coordinator begins by asking each of us to give a brief summary as to where, from each of our various perspectives, the project currently stands.

The admin begins by outlining some of the ways the project has not yet gone over budget, but could soon enough if we’re not all careful.

Then our engineer speaks briefly about the hardware. Until we take the entire thing apart it will be impossible to know for certain, but so far he’s been able to find no tampering or alterations in any aspects of the physical structure of the prototype.

Our analyst has been working on aspects of the language the prototype used before it went dead compared to the language it’s now using upon its return. At least forty per cent of the words it now uses are either new or words it rarely used before. And eighty per cent of the syntax bears little resemblance to pre-disappearance syntax.

Inge explains that, though she still believes some form of malware or virus is the most reasonable explanation, she can so far find no evidence of any outside presence or influence anywhere in the system. She then mentions someone she met but changes her mind and cuts her presentation short. This mention makes me curious. However, curiosity about one another is not a prevalent feature within the team, so no one asks for further clarification or, for that matter, ever asks about it again.

When my turn comes I still have little idea how I might accurately approach the topic. We have been talking for almost an hour and, so far, no one has even mentioned the idea of “five hundred years in the future.” Am I really going to be the first to mention it? It seems like a risk I’m honestly not sure I should take. At the same time, my current research is only about that, so if I don’t mention it, it will be like I’m talking in circles around nothing.

I begin by outlining what I refer to as “a few loose conjectures or possibilities”: that our prototype appears to have interacted with other computers and has some stories it wishes to convey about these interactions. These stories take place in a world where there are many computers capable of communication. Other features of this world: limited instances of people and a great number of references to wildlife and other creatures, with a specific emphasis on creatures that are small, such as insects. Our prototype, to the best of our knowledge, doesn’t possess a complex enough sentience to have fantasies or dreams. Nonetheless, for the sake of concision (and for lack of a better explanation), we will refer to this world as a fantasy. In this fantasy, from what we are able to ascertain, computers mostly interact with other computers and therefore, we can assume, people mostly interact with other people. Something that seemingly worries the computers in this fantasy is that, at times, they require repairs that only humans are able to provide. This dilemma seems to produce a fair bit of computer anxiety, if such a thing can even be said to exist.

Inge interrupts me: “I think we all understand this fantasy, as we are calling it, to be a fantasy of the future.”

Then the coordinator asks the question I was, up until that point, doing my best to avoid: “Are any of us actually willing to entertain the possibility that our friend the prototype has, in any sense, actually visited the future and returned to tell the tale?” No one wants to reply and yet it is clear that everyone has an answer, or at least a theory, and a geek with a theory finds it difficult to stay silent for long.

Analyst: When I think about it, it makes me wonder how exactly I imagine the future. So many of my images of the future, I then realize, come not from me but from various media – books and movies and television – and almost all of these images are dystopian. But this tells me nothing about the future. It only tells me about now. That people thinking about the future now feel an enormous anxiety that things will not turn out well. Of course for good reason. There’s much to be genuinely worried about. So a future in which computers and people mostly each keep to their own might be a kind of best case scenario. At the very least the computers haven’t enslaved the people and the people haven’t enslaved the computers, which is the way so many screenwriters today might write up such a fantasy.

Engineer: At times I’ve thought about ethics regarding our project here. What is the prototype exactly? Is it a prototype for a kind of easy, smart enough friend that we can then sell to people? When I was told we wouldn’t be giving it a voice box of any kind, because we wanted to focus as completely as possible on the quality of its thought and learning, and didn’t want to be distracted by the quality of its voice, it gave me pause. What if at some point it wants a voice? Who are we to decide it can’t have one? Which leads to all the other questions. We’re so far past the point where it can pass the Turing test that you really have to start asking yourself: well, what is it? And when do we start letting it make some real decisions for itself? This was all before it went dead and returned. And, according to its own account, it did make a decision for itself. It found a “wave” and hopped on. What then becomes stranger is we seem to have no way of knowing if what is real for the prototype – this fantasy as we’re calling it – is also real for us. To what extent this is or isn’t a shared reality. What’s to prevent the prototype from building its own reality and choosing that one instead? Our reality might not have all that much to offer it.

Inge: I think we all know that the prototype has experienced something. We don’t know what. But it has clearly experienced something, as in an experience that happened to it, not something it did to itself. Something happened. It went somewhere or something entered it. We can all feel this intuitively, whether we admit it to ourselves or not.

Coordinator: I propose we split into two groups. One group begins working from the assumption that it has visited the future. And the other group from the assumption that it hasn’t.

This proposal was truly my idea of hell. I didn’t want to work from some hypothetical presumption. I wanted to work directly with the prototype in order to find out what we could learn from it. But there was no use fighting. When the coordinator got some stupid idea in her head her word became law. Inge and I were put on the same team and given the proposition that the future being recounted was in some way real, which I was gradually beginning to realize was what I might actually believe. This was also the case for Inge. So, at the very least, we were starting on the same side of the page.


+ + + +


When Inge got home that night she was almost expecting to find Penelope still there. But he was not. Her apartment was empty as per usual. Today’s staff meeting had been particularly excruciating, which was saying a lot, since the average staff meeting was already extraordinarily unpleasant. She was thinking about one of the sentences the prototype had offered up earlier that day: “Two points speak into one line, one work, task that makes both points exist.” It made her feel that, whatever else they may or may not be doing, whatever fantasy they might be creating or working within or channeling, perhaps they were also in some sense working with the future. That they were inventing something that might still exist, in some form or other, in five hundred years, and that something might be sending a message back. Or not really a message. That something might be attempting some form of collaboration, seeing if there was any way for them to all work together. Or it was a virus sent by some hacker with the intention of completely fucking them up.

Then she thought about Penelope. If computers in the future were anxiously searching for ways to get humans to assist them with much needed repairs, perhaps “algorithms” such as Penelope were the method they’d arrived upon to solve such a problem. So much pure speculation, so little evidence. What was she even doing at the lab anyway? Would it really be possible, over time, for her to achieve any position of relative power within the organization? There were several glass ceilings to push through before such a feat could even be imagined. And even if such a miracle were to occur, to what use could such an organization possibly be put? Was the goal to set technology aside and focus as much energy as possible on the world of soil, plants and animals? Was there some way technology could be deployed toward a diminishing emphasis on technology? Or was this the wrong approach altogether? Was the project instead a technological world in which all technology was programmed to work only within ecologically sustainable limits? And, if so, what kind of situation would be able to maintain and enforce such limits?

She imagined a computer virus the sole purpose of which would be to spread itself and force every machine it infected to work completely within sustainable ecological limits. So if it was a computer that worked in agriculture it would refuse to put through any orders for pesticides and instead provide the user with persuasive information regarding sustainable permaculture alternatives. And if it was a computer that worked in fossil fuels it would continuously shut down any and all possibilities for extracting and transporting such substances.

Imagining along similar lines, she considered the possibility of a second computer virus designed to redistribute wealth, which would rush through all the bank accounts and fiscal paradises in the world and, in any account containing over ten million, it would instantaneously redistribute all funds over the ten million mark evenly into all the bank accounts that are under five thousand. In a split second the rich would be considerably less rich and the poor would have more money. I suppose there is a problem with this idea in that rich people don’t just leave their money sitting in bank accounts. They keep it in stocks, bonds, hedge funds and other investments such as art and real estate. Perhaps the more paranoid keep large sums hidden underneath a removable floorboard. Nonetheless, this imaginary computer virus could still redistribute an enormous amount of wealth and serve as an intimate provocation to anyone who believes that wealth deserves to be hoarded.

When I told someone this idea they suggested it wouldn’t be very effective because poor people don’t have bank accounts. Not to be discouraged, I considered various ways around the dilemma. A widely distributed note that encouraged everyone with a bank account to give some money to those without one. Associations in which people got together to distribute a portion of their newfound wealth to others out on the street.


+ + + +


I’m not sure if Inge is late or I’m early, but we arrive at different times. While waiting I ask the prototype a few questions:

– Are you happy to be back here? With us?

– Other world. This world. Two points on a wave will or won’t connect.

– Are you the connection? Did you come back to connect these two worlds?

– If I didn’t exist future would invent me.

– What if I were to say something like: the computers you met in the future are your great, great grandchildren?

– They are point on wave. I point on wave. Without two points: no wave.

“No wave” was a genre of music I briefly listened to when I was a teenager. Already back then it was strangely outdated, conjuring a time and place full of dirt and grit and the luminous illusion that it was completely possible to rebel and, in doing so, break new ground. Or at the very least piss people off. It occurs to me that I might say something like: myself today, and lower east side New York in the late seventies, are two points on the same wave, and that I would mean something rather specific by it. How that moment in cultural history continues to influence how I am today. There is no need for time travel in order to make such a statement. Could this be at all similar to what the prototype is trying to tell us?



[Unfinished.]


