Fledgeling Discussion 01

Download Audio (41 MB)
If you’re interested, here’s the raw live recording, with an extra twenty minutes of random chatter and dropped calls.

NOTE: The following is a dialogue between a couple of friends of mine. I’ve talked with “A” extensively on the topic of Fledgeling, and he’s probably the world’s leading expert at this point (certainly above me; I may be a “visionary”, but half the time I can’t concoct two clauses to rub against each other). Participant “Q” is also an old friend, and a veteran of industry-critical software development as well as high-traffic web design. The dialogue has been edited for clarity, and I have inserted notes where appropriate to offer clarification. On the whole, I find this discussion fascinating and enlightening, and I hope you do as well. Enjoy!

Q: Do you think the whole “glorified social experience” is just a sort of fad that might pass? Or do you think games will continually develop that way?

A: Both yes and no. An interesting phenomenon to which crowdsourced funding, and the increased accessibility of development to starving developers, have given rise is a wave of new, usually single-player, original game experiences. Things like Minecraft and Kerbal Space Program and Dwarf Fortress. At this point they’re generally focused on exploring an idea. Minecraft is basically “will people play a game that isn’t really a game?” in a Java app. KSP is “will people play a game where the only rules are Newtonian physics?” Dwarf Fortress is “will people value integrity in a simulation over presentation?”

And all of those games (and many more) are getting pretty loud “yes” answers. Things like that are risky ventures that can’t survive in the AAA market, and that would suffer significantly in the face of pressures from marketing and long-term business strategies. Trying to incorporate microtransactions as a business model would have held back all of these games. Being forced to release… you know, ever, would have held back all of these games. Losing their dev staff on that release day would have killed them. But they are showing us that people actually do value gameplay and integrity of simulation, both of which have been waning arts in the face of the “MOAR POLIES ZOMGZOMG” console races (or whatever the crap makes graphics better these days).

Q: You actually bring up a subtle point. “More polygons” is always desirable, but the mantra of more recent architectures has deemphasized geometry and put more focus on pixels and shading, for the express purpose of making simulations better and more realistic.

A: Simulations or presentations?

Q: Well, add to that the GPGPU movement, and it covers the simulation. In other words, we don’t necessarily need to see more things on screen, but the things we do see we want to look more realistic.

A: I’m not sure what that means.

Q: Think about it this way: we care less about how many things are shown on screen, and more about how they look and what they do.

A: OK. That does sound valuable. Also, don’t misunderstand me; I’m not trying to devalue the immense progress we’ve made in presentation. We’ve come a long, long way from Doom and that’s a really good thing.

Q: I think it slightly ties (at least superficially) to what you and Dudecon concluded. Even the huge game shops are starting to see that showing more polygons isn’t necessarily selling more games. Both the PlayStation 4 and the Xbox One, which launched in November, are not graphics monsters; they are about as powerful as a mid-range Radeon (7770-7850). But they are compute monsters. I think they are starting to understand that a “pretty world that works terribly” is far inferior to an “adequate world that is far more interesting.”

A: Right, and there have been some more encouraging titles in recent years, but it’s going to take major dev houses a long time to start producing “good games” again. Pretty games and fun games, yes. Games that offer valuable social experiences, yes. But “good games” are really hard to do and the development model isn’t set up for it. Unfortunately I’m not sure they’re seeing past “letting people play this game online with their friends using voice chat is the #2 most important feature” (after #1: letting people spend real money inside the game).

Q: True

A: The latest SimCity is a good example of both of those failings. They brutally sacrificed integrity of simulation in order to provide multiplayer. The SimCity games have always been about simulation first, and moving away from that really hurt the integrity of the whole franchise, not just the gameplay quality of the latest game. They also have paid DLC, which is probably not a big problem, but it’s dev time that would have been better spent elsewhere.

Q: 🙁

NOTE: This point, of opportunity cost, is a sore one and often visited. Proponents of high-fidelity graphics in games will often point out that graphics increase the player’s enjoyment. What they often fail to point out is that the cost of developing the required pipeline, assets, and hardware detracts from other, less apparent aspects. Fidelity of simulation, NPC AI, UI design, play-testing: all of these facets are often made to suffer in the pursuit of other goals (often graphics, marketing, and other “movie-like” qualities) which, while good in themselves, must be balanced against their costs in this unique medium.

A: So, in general, I think that gaming culture will evolve past the current trend of “Skinner box makes me feel soo good”, “ooh, pretty…”, and “dudes, voice chat and Twitch integration?!??!?” as ends in themselves. And, hopefully, past Skinner-box implementations entirely (though they are so good at making money that that’s unlikely).

But I don’t think it’s going to happen in a way that excludes mobile play. Cloud-based simulation engines are going to become commonplace, and device-agnostic presentations are going to be the norm. Control schemes for mobile devices will improve. My current guess is that it will be motion sensing, similar to Kinect and the like but on a smaller scale (your hand, instead of all of you), or that it will be wearable (like a glove or something on your other hand).

Q: Do you suspect fledgeling will be cloud-based from the start?

A: The rough intent for fledgeling is to make it a generally peer-based simulation with nested top-down moderation, which could be run locally, on a single cloud server, or peer-to-peer.

Q: Ah… Sounds very complicated 😀

NOTE: Both A and Q are precisely correct here. Hopefully the implementation will be less complex than it sounds.

A: Yes. 😛

Q: I hope you start really small.

A: Yeah. Dudecon’s vision for it is very big. But it can’t start there. So, if you recall the fundamental idea beneath fledgeling: Everything is a node. Nodes are fungible and scalable, and generally defined by the sum of their contents.

So, for example, a person is a person. But a society is also a person, because it’s just a bunch of persons. And it’s also matter, because all of those people have matter. That sort of general idea. Also (and this is big on Dudecon’s agenda), everyone simulates their world in their head all the time. You might imagine what you are going to make for dinner when you get home, and recall that you have waffles in your freezer. Well, your recollection of waffles doesn’t actually mean there are waffles. You could be mistaken. But it’s your freezer and you use it a lot, so you’re probably right. So, any node that can carry intent (like a character) will be fitted with all the tools to simulate the entire world, plus local images of what it thinks the world is (whether they are right or wrong) on which to base that simulation.
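NOTE: For readers who think in code, here is how I currently picture the “everything is a node” idea. This is a toy sketch with invented names (Node, Person, Matter, world_image), not anything from an actual Fledgeling codebase:

```python
# Hypothetical sketch: everything is a node, defined by its contents.
class Node:
    def __init__(self, contents=None):
        self.contents = contents or []

    def gather(self, kind):
        """Recursively collect all sub-nodes of a given type."""
        found = [n for n in self.contents if isinstance(n, kind)]
        for n in self.contents:
            found.extend(n.gather(kind))
        return found

class Matter(Node):
    pass

class Person(Node):
    def __init__(self, contents=None):
        super().__init__(contents)
        # Each intent-carrying node keeps its own image of the world,
        # which may disagree with reality (imagined freezer waffles).
        self.world_image = {}

# A society is "a person made of persons"; it is also matter,
# because all of its people contain matter.
alice, bob = Person([Matter()]), Person([Matter()])
society = Person([alice, bob])
alice.world_image["freezer"] = ["waffles"]  # a belief, not necessarily a fact
print(len(society.gather(Matter)))          # -> 2
```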

Q: Heh! That sounds like a computational nightmare!

A: Yes it does. A much better mind than me is going to have to architect it.

NOTE: The position of Grand Vizier of Programming Madness is open for the right candidate.

Q: Which is probably why no one’s done it yet?

A: Dudecon insists (I’m not sure “insists” is the right verb, but “believes” is true) that, because everything is the same, it is just a matter of finding the solution. Once found, since everything is the same, it will be universally applicable and the whole world will immediately work. I am not so optimistic. But, Dudecon is the visionary, not me. I’m just trying to help out.

Q: The solution meaning the algorithm that correctly defines a node?

A: Yeah.

Q: Well therein lies the rub.

A: In my mind, the basic problem is memory.

Q: Memory and computational resources.

A: No, memory as in character memory.

Q: And cyclic dependencies are going to be a bear.

A: Yeah. I’ve run into that a bit in the mockups I’ve done. I wound up using some (what I thought was) clever meta code in Python to deal with it. But in a strongly typed language like C-anything, basically all of my work on that is close to useless. I’ve had a few thoughts on how to use interfaces to great effect, and the work I’ve done with VOID has been very helpful in that regard.

NOTE: A here refers to his Kerbal Space Program mod which adds helpful GUI and HUD aspects to the game. VOID stands for Vessel Orbital Information Display.

But, back on the simulation: we know that computers can simulate a small world down to a gnat’s ass (literally; there are probably gnat asses in Dwarf Fortress, and if there aren’t, you could make them with a change to a text file). So, Dudecon suggests that this can scale to a world as large as you like by abstracting away all of the details that aren’t currently relevant.

For example: when a dwarf is uninjured and not exposed to any pathogens, is it necessary to simulate his circulation system? Probably not. And if there are internal effects that would be desirable to simulate (a heart attack, perhaps), the “dwarf” node should be able to bake a “chance of failure” out of the circulation-system definition and deal with that abstractly.

NOTE: To be fair, Dwarf Fortress likely does suspend simulation and abstract in such cases. The following, however, I think it does not do.

And if there is a failure, you “wake up” the circulation-system node and ask it to retro-sim a reason for the failure. So it works backwards from “your heart stopped” to “your arteries were clogged” to “you ate too much salty food” to “dried goat meat is salty”, or some chain of events like that. And then, because in Fledgeling time is malleable, if you don’t like that result, armed with this new information, you can go back in time and teach your dwarves not to eat so much dried goat meat. Or you can take a step up the abstraction chain from “a bunch of dwarves” to “a set of dwarven societies” and find whole groups of dwarves that don’t eat so much dried goat meat, and use them to build a master race. Or… whatever, basically.
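NOTE: To make the bake-and-retro-sim idea concrete, here is a toy sketch. The class names, the baked probability, and the cause chain are all invented for illustration; the real system would presumably derive them from the simulation itself:

```python
import random

# A hypothetical cause chain a woken node could retro-sim from, working
# backward from the observed failure to some plausible history.
CAUSE_CHAIN = [
    "your heart stopped",
    "your arteries were clogged",
    "you ate too much salty food",
    "dried goat meat is salty",
]

class CirculatorySystem:
    def bake(self):
        """Abstract the sleeping subsystem down to a single number."""
        return 0.001  # chance of failure per tick, baked out of the details

    def retro_sim(self):
        """Wake up and invent a plausible reason for the failure."""
        return list(CAUSE_CHAIN)

class Dwarf:
    def __init__(self):
        self.failure_chance = CirculatorySystem().bake()

    def tick(self, rng):
        # While healthy, we never simulate circulation at all.
        if rng.random() < self.failure_chance:
            # Only on failure do we zoom in and reconstruct a history.
            return CirculatorySystem().retro_sim()
        return None

rng = random.Random(42)
dwarf = Dwarf()
for turn in range(10_000):
    chain = dwarf.tick(rng)
    if chain:
        print(f"turn {turn}: " + " <- ".join(chain))
        break
```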

Then, Dudecon envisions a bottom-up simulation with top-down moderation. The “bottom” of the simulation in this case is “the set of nodes that are as large as possible to provide a meaningful experience at this level.” So if you’re playing a dwarf fortress type game, “dwarves” and “wall bits” might be meaningful nodes, but “dwarf hearts” and “silicate particles” might not be. Then each of those dwarves is going to simulate his own world and act inside it. And when he wants to act, he passes that action up to some parent node which adjudicates it against other actions.

NOTE: The above describes well the kind of simple computational path which leads to surprising and interesting behavior when the simulation is allowed to range freely through both time and space. The de-coupling from scale (to simulate the appropriate scale instead of using brute force at a set “fine” scale) and from time (to simulate forward or backward as appropriate) is critical in the fledgeling platform concept.

Q: How do you know what those limits are, though? As a human, it’s easy to postulate but for a computer program to determine it… that would be difficult.

A: Yeah, that’s pretty nebulous at this point. I suspect that we will wind up prescribing a way of seeding that into game scenarios. Fledgeling will come with some default game types. So you could play “The Sims” or “Dwarf Fortress” (which are basically the same game), then zoom out and play “SimCity”, zoom out again and play “Civilization”, and zoom out again and play… something else. You could also zoom in and play “circuit designer” or something like that. So we’ll have thresholds for this sort of thing prescribed in the game scenario definitions.

NOTE: I see no real difficulty in teaching the computer to select the appropriate range of scales over which to simulate results. (I may be delusional in this respect.) However, each scenario will certainly need to include or import rules about how relatively “precise” the simulation should be, as this is critical for the consistent functioning of the game-world.

Dudecon also talks about having nodes define “tolerances” for most of their values: “tolerances” for bottom-up interactions and “shatter points” for top-down interactions, basically. So, if I’m simulating a container box, that box has a bunch of stuff in it. But when I’m actually running the sim, I don’t need to do physics on everything inside, unless my net acceleration is higher than the lowest “shatter point” among the contents of the box, in which case (depending on Schrödinger, I think) we either zoom in and sim a break, or we just say “it broke”.
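NOTE: The shatter-point fast path might look something like the following sketch (all names and numbers hypothetical): the container caches its weakest tolerance, so the common case is a single comparison and the contents are never simulated:

```python
class Item:
    def __init__(self, name, shatter_point):
        self.name = name
        self.shatter_point = shatter_point  # max acceleration this survives

class Box:
    def __init__(self, contents):
        self.contents = contents
        # Cache the weakest tolerance so the fast path is one comparison.
        self.min_shatter = min(i.shatter_point for i in contents)

    def jolt(self, accel):
        if accel <= self.min_shatter:
            return "nothing to simulate"  # skip physics on the contents
        # Otherwise, either zoom in and sim the break, or just declare it.
        broken = [i.name for i in self.contents if i.shatter_point < accel]
        return f"it broke: {broken}"

box = Box([Item("anvil", 500.0), Item("teacup", 3.0)])
print(box.jolt(1.0))   # fast path; the contents are never simulated
print(box.jolt(10.0))  # -> it broke: ['teacup']
```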

Q: This seems like a metric ton of parameters, even with tolerances.

A: Yeah, it is not insignificant, partly because the math is so often different. Objects are rarely as fungible as Dudecon would like, I think. Or perhaps I have just not approached things in the right way.

NOTE: It is well noted that the fungibility of objects (in the appropriate orthogonal structure (as discussed below)) is critical to an elegant Fledgeling implementation. Also, as noted, this is not easy to think about, let alone to implement.

Q: At what point are you going to stop subdividing? Is there a smallest unit? Like a fledgeling Planck constant of sorts?

A: There will have to be, but we don’t have a good consensus on how to define it.

NOTE: I agree that there will need to be a “Planck constant” of sorts. However, as with reality, this is a soft limit, not a hard one. It is the scale at which all the quantum randomness makes it literally impossible to distinguish between the things you are handling. I suspect the scale limits in fledgeling will likewise arise naturally from the scenario as soft limits, instead of being hard-coded in as hard stops.

A: So, here’s an example of one scaling-up problem I ran into. In Dudecon’s thought, materials are defined by density, so “mass” is a derived number, based on how much stuff you have.

NOTE: I’m actually not convinced this is the right way to go about it. Materials are defined by density, but objects are defined by mass. In any case, the description that follows is useful in demonstrating the requirement for rigorous thought when dealing with strongly nested aggregating game systems.

But that doesn’t scale up at all. If I define water with density 1 (yay metric!) and I have 1 unit of it, then it has a mass of 1. But if I have a container that is sparsely occupied with water, its density is derived from its contents, not prescribed. You can’t aggregate density; you can’t just sum up all the densities. But other things you can sum up. So I can’t make a generic object with an “Aggregate” function, because different objects aggregate differently. So then I have to define different Aggregates based on the way the math works for each kind of property. For example, “internal volume” aggregates additively if the inside of a container is defined as a complete set of sub-containers (such as a room in a voxel game, or a box thought of as six 1 cu. in. cubes instead of one 6 cu. in. extruded rectangle). So “internal volume” aggregates additively, but density does not.

Q: Makes sense.

A: So if I ask the box for its mass, it can get its mass from its material density times its skin volume. But then it has to get actual computed mass numbers for all of its contents. And if I ask the box for its density, it has to know not to tell me the density of its material, but to compute it from its computed aggregate mass and external volume.
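NOTE: The aggregation problem A describes is concrete enough to sketch. Assuming a simple two-class model (Body and Container are my invention here, not A’s mockup), mass aggregates additively while density must be re-derived, which is exactly why a single generic Aggregate() won’t do:

```python
class Body:
    """A leaf object: material density is prescribed, mass is derived."""
    def __init__(self, density, volume):
        self.density, self.volume = density, volume

    def mass(self):
        return self.density * self.volume

class Container:
    def __init__(self, skin_density, skin_volume, external_volume, contents):
        self.skin = Body(skin_density, skin_volume)
        self.external_volume = external_volume
        self.contents = contents

    def mass(self):
        # Mass aggregates additively: skin plus computed contents masses.
        return self.skin.mass() + sum(c.mass() for c in self.contents)

    def density(self):
        # Density does NOT aggregate additively; it must be re-derived
        # from aggregate mass and external volume.
        return self.mass() / self.external_volume

water = Body(density=1.0, volume=1.0)  # 1 unit of water, mass 1
box = Container(skin_density=0.5, skin_volume=0.2,
                external_volume=4.0, contents=[water])
print(box.mass())     # 1.1
print(box.density())  # 0.275 -- not any of the prescribed densities
```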

Q: I see what you’re saying. But do you always have to assume a homogeneous density? There isn’t one single density for the object, is there?

A: Well, there is, but that may not be relevant, depending on what you’re modeling. It’s not a prescribed value. If I’ve got a box full of dice, it has a fixed density, until I remove a die.

Q: So, as I’ve been saying, density is not homogeneous.

A: Well, right. If you’re looking for the center of mass, you’re right. If you’re looking for “what’s your average density”, it works.

Q: OK, average density. Or aggregate density?

A: That’s probably a more appropriate term.

A: Anyway, like I said, the math is different for a lot of functions that seem similar, and that makes it hard for everything to be the same. So then we’re stuck trying to classify things into “aggregation classes” so that we can build everything off a few sets of ideas, and that’s not something that I did correctly in my first pass through the last mockup. I’d like to try again, using C# instead of Python. Mostly because I <3 C# right now. 😛

Q: To me, this whole idea sounds impossible in the current form. You are essentially trying to simulate the physical universe using a small piece of the physical universe.

A: It’s a long way out, at any rate. You’re talking about the bottom-up simulation authority?

Q: Yeah

A: Yeah

NOTE: Yeah

… But the real power of simulation authority isn’t bottom-up, but top-down. If you start from top-down, you know what you’re going to end up with. If you start bottom-up, it’s always out of control. This is one of the great strengths of computer games. We can start with our conclusions and work backward, unlike “real” physical simulation where we are beholden to a certain token conformance to reality.

Q: You’d have to distill it down to a much smaller subset of physical properties, and set a much larger “fledgeling Planck constant” for lack of a better term. If you did something like that, I’d say it would be very possible to come up with something.

A: Well… the current idea does not require Fledgeling to be a particularly expansive simulation in terms of the properties available. Dudecon would probably like it to be, but intentional oversimplification is currently part of the design theory.

NOTE: I really WOULD like it to be, but I also like the idea of grabbing a fundamental sub-set and playing with that as well.

A current thought of Dudecon’s goes thusly: Nothing should have more than 12 parts.

Q: What 12 properties does he suggest? Or are those still being determined?

A: That’s not really where he’s thinking, in spite of my efforts. 😉

NOTE: I intend to leave the fundamentals largely up to scenario developers. The triplicate Body (the physical universe), Soul (the informational and conceptual universe), and Spirit (the intentional and idealized universe, including “personality”) seem a good starting point. We will, no doubt, play around with these fundamentals once the foundation of a really solid Fledgeling game engine is in place.

A: But it would apply there as well. Because of parallel node structures, 12 parts might be really weird, because those parts may each have 12 parts, and they may be orthogonal to each other, and so appear to be more than 12 parts. So, a person should only have 12 parts, but those parts might include things like “a body” (which could also have 12 parts) and “a personality” (which could also have 12 parts).

NOTE: !

The “body space” and “personality space” are only tangentially related, so we tend to think of the different parts of them as distinct pieces of the whole. So, as siblings, rather than as cousins, basically.

But, that idea does trickle down to even the fundamentals needing to have only 12 parts. So, “matter” as a concept can only have 12 parts, and that will help to limit the scope of what we simulate (really, the fundaments should probably only have 3 or 4 parts.)

NOTE: Agreed! Twelve is the upper limit, and fewer is usually better. I would shoot for seven as a target number, but as few as three is probably fine. Too few properties results in layering instead of nesting, which is fine too, but tends to result in uninteresting obfuscation instead of deep complexity. Notice here how Q makes the brilliant connection back to the previous topic. This guy gets it!

Q: I am seeing now what you mean about memories. They are a whole different ballgame! Completely independent, but only attached to the matter simulation at certain points. Not completely independent actually. Instead, independently simulated, but attached to (or driven by) certain objects in the matter simulation.

A: Right. Memories and the internal world-sim are really important to Dudecon’s current vision. But, each of those objects also has to be masked by a perception layer. I know a lot about you, but I don’t know everything.

Q: And some things you know are not actually correct!

A: Right. So if I am trying to buy you a birthday present, I might buy something you don’t like, which would be the opposite of my intent, because my information is incomplete and partially incorrect. Also, my simulation methods are not perfect. When I’m playing ping pong I can see the ball and the paddle, so in theory my internal view of where things are is pretty good, but my method of calculating where to put the paddle to get a desired effect with the ball is not perfect. So I miss shots sometimes. So we also need ability masks for both simulation methods and action methods, because even if I am right about where to put the paddle, if my arms don’t get it there correctly, I will still miss the shot.

Q: Yeah. Even the act of seeing is masked. And that whole “masking” concept adds another big list of completely different properties and math for how they interact with the properties on each object.

A: Right.

Q: ugh… lol.

A: Yup!

Q: wtb quantum computing… or something magical.

NOTE: The above discourse on perception essentially means a great deal of “duplication of information”, which programmers abhor. However, I think it is necessary, especially in light of what follows regarding race conditions and multi-threading. As to adding a multiplicity of properties regarding perception, I think that it will be simpler than it first appears. Objects are easy to perceive when they are large, simple, slow, and solid, and difficult to perceive to the degree that they are the opposite. All of these attributes will already exist in the simulation, so it seems not too arduous to implement perception on top of everything else. In this too, I may be delusional.

A: So, all the way back to Dudecon’s idea of networked simulation. Since much of the simulation work can be handled by subordinate nodes (I will determine what present to buy you on my own; I don’t need a superior node to simulate that for me), subordinate nodes only need to defer simulation to a superior when their action falls outside their sphere of influence. So, I could move my phone from my belt to my hand without deferring, but I couldn’t hit you with a stick without deferring. Those zones would ostensibly be defined using thresholds and shatter points again… Or something.

Also, some action can be taken without deferring. For example, I can talk to you without deferring, because it doesn’t change the state of the world. It changes what I think and what you think, but not the world. I think you hear and understand what I say, and you think you hear what I say and understand it, but neither of us needs to defer simulation for that. But, if we want to go change the world with our ideas, we would defer it up, because the world is outside our grasp. (Where “the world” is a society or subset of a society, which is just a big character node, and can think and respond to ideas just like you think and respond to mine; it just has a much lower opinion of us than you do of me.)
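NOTE: A minimal sketch of the sphere-of-influence rule as I understand it (Actor, Archon, and the influence set are illustrative stand-ins): actions on targets inside a node’s influence resolve locally, and everything else is deferred up for adjudication. “Archon” is the term we later adopted for the local GM node, as mentioned in the comments below.

```python
class Archon:
    """Parent node standing in for the local GM."""
    def adjudicate(self, actor, verb, target):
        return f"archon rules on: {actor} {verb}s {target}"

class Actor:
    def __init__(self, name, parent, influence):
        self.name = name
        self.parent = parent
        self.influence = set(influence)  # targets this node may change alone

    def act(self, verb, target):
        if target in self.influence:
            return f"{self.name} {verb}s {target} locally"  # no deferral
        # Outside our grasp: the parent node must adjudicate.
        return self.parent.adjudicate(self.name, verb, target)

gm = Archon()
me = Actor("A", gm, influence={"my phone"})
print(me.act("move", "my phone"))  # local; never leaves the node
print(me.act("hit", "Q"))          # deferred up for adjudication
```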

NOTE: What follows below is, perhaps, my favorite part of this conversation. It shows both Q and A being creative and thinking not only about what they are saying, but about what the other is saying, and what they both might think about what they are saying, and the implications of all those things. Pay attention! This is amazing!

Q: Actually, to play devil’s advocate: what if we were both standing on a glacier, and you talking would start a landslide 😛 (or avalanche)?

A: Well, I really meant “communicate”, not talk. The physical act of speaking is technically different than the act of communicating. But! That gets into an intent mask (I think I am saying what I mean to say) and an ability mask (I tried to say what I think I’m saying). I doubt very much that Fledgeling will simulate sound power.

NOTE: In this, I think, A is happily mistaken.

But, your point is not invalid. And the technical answer is “that’s a different thing.” Also, that would be handled by top-down thresholds. So, suppose we are simulating sound power. My voice has a maximum power output of X, and I’m talking at 50%. The local environment has cached a sound power threshold of Y. If X*.5 < Y, the effect is not simulated and nothing is deferred.

But, fun corollary: we are on this glacier, and in my mind, it’s a sturdy ice rock and nothing could topple it. But in your mind, it’s a fragile pile of snow, bits of ice, and detritus, and the slightest sound could bring the whole thing down. When you sim “X*.5 < Y” you will get “false”. When I sim it, I will get “true”. So I will shout, and you will shush me! So then the question is, how do we defer simulation at all? Does the “me” object need to know the real Y and the fake Y? Does the parent subscribe to my sound-power events, and discard all those less than the real Y?

NOTE: You will never know the “real” things, no matter how close those things are to reality. Also, the glacier will never know if you are making noise unless someone tells the glacier about it. But the other person will not actually hear you if you don’t tell the glacier space that the sound of your voice is propagating at some level. I think the subscription case is the more likely of the alternatives offered, but I don’t know, at this point, what form it will actually take in implementation.

Q: That is a good question. It sounds to me like it would need to be hard-coded, at its very core, or at least on some basic level (with larger aggregate objects deriving their threshold from a mix of the hard-coded core values (which in and of itself presents another problem)).

A: Threshold-based event subscriptions are my current notion. Nodes then cache the threshold, and they don’t recalculate it until something triggers a sim of their parts in that property-space.
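NOTE: Putting the glacier example together with the subscription notion, a sketch might look like this (the thresholds and numbers are invented): each character tests “X*.5 < Y” against their own believed Y, while the parent space filters sound events against the real, cached one:

```python
REAL_THRESHOLD = 40.0  # the Y known only to the glacier space itself

class GlacierSpace:
    def __init__(self, threshold):
        self.threshold = threshold

    def on_sound(self, power):
        # The parent discards events below the real (cached) threshold.
        return "AVALANCHE" if power > self.threshold else None

class Character:
    def __init__(self, name, believed_threshold):
        self.name = name
        self.believed = believed_threshold  # their own (possibly fake) Y

    def thinks_safe(self, power):
        # Each character sims "X * 0.5 < Y" against their own belief.
        return power < self.believed

space = GlacierSpace(REAL_THRESHOLD)
a = Character("A", believed_threshold=1000.0)  # "sturdy ice rock"
q = Character("Q", believed_threshold=10.0)    # "fragile pile of snow"
power = 0.5 * 100.0                            # talking at 50% of max X

print(a.thinks_safe(power), q.thinks_safe(power))  # True False: Q shushes A
print(space.on_sound(power))                       # the real answer: AVALANCHE
```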

Q: I still cannot get past how quickly RAM and CPU consumption will balloon around the location the human player is viewing, and how, even when they leave such an area, much of the RAM consumption will not be returned.

A: Oh yeah, I did a really simple mockup of this in Python. I’ve still got it someplace. In my first implementation that tracked history, I hit Python’s memory limit after like 500 turns. Now, that was because of a leak; I got it to run into the millions of turns with some optimizing. But yeah, simming a lot of complex nodes like characters will get very costly very fast.

Now, Dudecon also has pretty aggressive ideas about abstraction and retro-simming. So, suppose we’re playing a single-player game. Basically everything that’s not in focus right now is not being simmed. It’s being abstracted and handled based on pre-baked percentages and such.

Q: Oh, side question. How do we determine what should be simulated and what shouldn’t? Isn’t that a big calculation unto itself? For example, you are standing in a room. How do we determine, from the set of all objects in the universe, which need to be simulated or even checked for pre-baked values?

A: Yeah, that hasn’t been very well explored at this point. Dudecon’s idea is that only things “in view” get simmed; everything else runs on pre-baked values. “In view” may go a little deeper than that, depending on the thresholds and shatter points of nodes and subnodes “in view.”
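NOTE: For the “in view” rule, a toy scheduler could look like this (FarmNode and the baked rate are invented): watched nodes get real sim steps, while everything off-screen advances in one cheap jump on pre-baked values:

```python
class FarmNode:
    def __init__(self):
        self.grain = 0.0
        self.baked_rate = 2.0  # pre-baked grain per tick while abstracted

    def sim(self):
        # The "real" simulation: weather, soil, dwarves, etc. would go here.
        self.grain += 1.9  # a detailed result, close to the baked rate

    def abstract_tick(self, ticks):
        # One cheap update covering any number of off-screen ticks.
        self.grain += self.baked_rate * ticks

def run(world, in_view, ticks):
    for name, node in world.items():
        if name in in_view:
            for _ in range(ticks):
                node.sim()             # fully simulated while watched
        else:
            node.abstract_tick(ticks)  # abstracted while off-screen

world = {"home farm": FarmNode(), "distant farm": FarmNode()}
run(world, in_view={"home farm"}, ticks=100)
print(world["home farm"].grain, world["distant farm"].grain)
```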

NOTE: While the above is true, even most of the things “in view” should have no need for simming most of the time. Like carbon oxidation slowly spreading in fine lines along a half-burnt sheet of paper, the simulation will spread along the concept-space, leaving most of the space static, with only small portions changing at any one time. Likewise, these simulation “fronts” may spread far outside the “in view” window, but hopefully they will remain light-weight and suitable for abstraction into metaphor.

Q: What if, for example, you are watching a security camera feed that is watching something else in another part of the world?

A: Right, so you’d sim you and your room, the TV, the camera, and what’s in view of the camera. And then those things might require that other things be simmed. And if the camera is pointing at your room… 😉

Q: Actually, that wouldn’t necessarily be that horrible

A: Yeah, “duringSim == true” and DoSim returns. 😉 I’ve got that case in some (probably terrible) path-finding code I wrote for a KSP mod.

Q: Multiplayer is where this would get hairy. You’d pretty much have to simulate synchronously. Hmm. Mind = Officially Blown

A: Yeah. Dudecon’s thought, as I understand it, is that multiplayer gameplay will be hard to distinguish from singleplayer gameplay because, basically, you don’t know if what you’re seeing is real. So you and your friends may see completely different things, but those differences may be completely valid.

NOTE: Yes. Also, in the following discussion, a lot of hand-wringing occurs over deadlocks and resource management. Keep in mind that each character will be operating on their own isolated headspace. The industry-standard shared game space is simply not viable when dealing with SI of this quality and magnitude.

Q: So in other words, you just make changes to the state of things when you want, and hope that it all works out? And live with the errors?

A: Right. Rather, retro-sim plausible reasons for the errors.

Q: Something tells me that isn’t going to work as intended. From a programming standpoint, if this is multi-threaded, you will have to have some sort of locking mechanism on each object to ensure that the object’s internal state is consistent. But if you are accessing objects asynchronously, you have the risk of two viewers getting into a deadlock. “Q is trying to look up object Y property so he can manipulate object X, but A is looking up object X property so he can manipulate object Y.” Neither of them can proceed because both are locked waiting on the other object to become unlocked. (happens in transactional relational databases all the time)

NOTE: In such a case, the concept of “time” becomes quite useful. Due to the speed of light, we “look at” objects only in the past, and we “modify” objects only in the future. Deadlocks become much more rare in such cases, though there must still be a mechanism for dealing with them (as A rightly mentions shortly).

A: How do those databases resolve the problem? (I’ve long thought that most of this will get stored in a transactional relational database)

Q: They cancel one of the transactions and issue an error.

A: Hmm. So. This, I think, is where the parent node steps in and adjudicates, and it would basically do exactly what you said. It cancels one of the actions, simulates a new result, and gives you that. So, simulation is deferred whenever you act outside your influence.

For the above case to be a problem, X and Y are both outside of our influence. In that case, we both defer that sim up and request a result from the parent simulator, and it determines the result and tells us. So, perhaps we are both trying to grab something off a table. That something is outside our influence, so we defer the grab to the parent node. It gets both our grabs and says, “Q is faster. Q grabs the thing, but A’s hand runs into his, knocking it out of Q’s hand. The thing shatters on the floor, releasing toxic gas.”
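NOTE: The adjudication path for the table-grab example might be sketched like so (the details are invented): children never lock the contested object; they queue intents with the parent, which picks a winner, cancels the rest, and sims the combined outcome, so no deadlock is possible:

```python
import random

class ParentNode:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.pending = []

    def defer(self, actor, verb, target):
        # Children never lock the target; they just queue their intent.
        self.pending.append((actor, verb, target))

    def adjudicate(self):
        by_target = {}
        for actor, verb, target in self.pending:
            by_target.setdefault(target, []).append((actor, verb))
        self.pending.clear()
        results = []
        for target, actions in by_target.items():
            winner = self.rng.choice(actions)  # e.g. "Q is faster"
            results.append(f"{winner[0]} {winner[1]}s {target}")
            for actor, verb in actions:
                if (actor, verb) != winner:
                    # The loser's action is cancelled; a new outcome is simmed.
                    results.append(f"{actor}'s {verb} is cancelled; "
                                   f"collision simmed for {target}")
        return results

gm = ParentNode(seed=1)
gm.defer("Q", "grab", "the thing")
gm.defer("A", "grab", "the thing")
print("\n".join(gm.adjudicate()))
```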

NOTE: In the following discussion, “Ri” is brought up several times. She is A’s character in a role-playing game which A, Q, and I play together with several other people. She serves as a stand-in example of “a character in a game” as distinct from “a player playing a game” (which is A in this case).

The “parent simulator” is basically a GM. When I want to do things that are firmly inside Ri’s influence, I don’t need to ask the GM. I want to change her mind about something, or fix her hair, or pick a new spell to learn. I can do all of those things without deferring simulation up. This isn’t a world change; this is a local change that is within my influence. But, if I want to light someone on fire, I have to defer the simulation to the GM, who enforces the rules. He will require me to make a dice roll, or will make a dice roll and compare it against some character statistic of Ri’s. Then he will report the outcome: I do or do not light something on fire.

NOTE: And then, imagine that the GM is himself a character, playing in a larger game, and must ask his own authority for permission to do certain things. And then that higher authority is himself under authority… And so forth until you get dizzy and discover God.

Also, because a lot of this stuff is hidden, I can do it and you will never know. If I don’t say “Ri changes her clothes”, you will continue to imagine Ri in the same clothes you’ve been imagining all along. If Ri changes her mind about something, you might not ever know, unless that opinion becomes relevant to the simulation at some point. Some of these things will get pulled by the display engine even when they might not in a tabletop RPG. For example, you won’t change Ri’s clothes in your mind unless I tell you to. (And even then you might not, but that’s another matter entirely.) But, if I change Ri’s clothes in Fledgeling, you will see the new clothes the next time Ri draws on your screen. Because they are plainly visible. And Ash is not blind.

NOTE: I think here A makes a false assumption about how much male characters care about the specifics of female character’s apparel. The point being, it is very possible to see something and not take note of it (as discussed below).

A: So, Dudecon’s basic theory is that world changes (and thus deferred simulations) are comparatively rare. Internal simulation can happen asynchronously and without adjudication or even communication. We can both think about what color Mount Rushmore is at the same time, because we don’t need to defer that simulation or even look it up from a superior node; we both know what color we think it is, and will proceed with that whether we are right or wrong. And if you are really unsure what color it is, you might make one up based on archetypal information you are comfortable with (well, it’s a mountain, so probably brown), or you will refuse to simulate it and proceed without an answer, or you will decide to take some action to look it up, like searching it on Google. Simulating the Internet is another ball of worms entirely. 😛

Q: I don’t know if I agree with his assertion that it’s relatively rare. As far as memories go, yes, but anytime two people are potentially actively perceiving the same thing, you get a lot of conflicts. And determining that there are conflicts in and of itself is computationally expensive.

NOTE: Here Q seems to mistake perception for alteration. There’s no reason that there should be a conflict if people keep making copies of the same data. Maybe I’m missing something, though. In any case, the parent (in whatever sense necessary) will certainly need to adjudicate access to “real” information and “real” access. Refer to the above talk about duplication of information, especially for the purposes of internal simulation. Since each character has their own world, they are never going to be stepping on other characters’ memory-spaces, which should keep things significantly more civil, in a computational sense anyhow.

A: Right; that’ll have to get adjudicated synchronously in the parent simulator. The point is more “comparatively rare” and that most of the things we do are completely internal.

Q: Determining when there are conflicts to me is the real expensive problem, not so much determining how to resolve them.

A: Yeah, perhaps so. There are two things involved here, generally. The first is perception. Seeing a new thing is commonplace, but we rarely actually do a very deep lookup. You drove to work this morning. How many cars did you see? For how many did you notice the color? How many license plates do you remember? The answers are probably “a lot”, “a few”, and “none”. So, if you’re not going to remember the license plate, why bother looking it up in the first place?

Q: True.

A: You did see it, but you weren’t ever going to do anything with it. You might remember a few of the cars you saw. Perhaps you saw a fancy sports car and it struck you as cool, or you saw the car of someone you know, and them inside it. But the rest of the cars are just cars. You saw them, but in your memory they’re just “lots of cars”. So, why look up the color at all?

Dudecon’s thought is that Fledgeling will handle this a lot like a GM handles presentation in a roleplaying game. Suppose we are walking through a forest. We ask the GM what we see, and he might say “A lot of trees, mostly deciduous.” (Or whatever that word is.) He might even throw in some shrubs or fauna, depending on how invested he is in the presentation right then. Now, suppose the GM has an encounter planned here (which is why we’re even simming this bit in the first place). When we ask him what we see he says, “A lot of trees, mostly deciduous.” Then you make a perception roll, and he changes it to, “One of the trees looks taller and older than the rest.” The only tree that has any meaning to this encounter is that one. The location and footprint of a few of the other trees might be relevant, but only that tree has any real meaning. The GM doesn’t want to sim a whole forest. That would take him hours, and be a big waste of his time! But he can say “a bunch of trees” and we’ll imagine a bunch of trees. And if we know what deciduous means, we might even imagine the right kind of trees.

Q: Makes sense. So the only difference is, when Fledgeling displays the scene, instead of relying on a human player to make a perception roll, it will decide, based on the player’s memory simulation and other physical properties, whether or not they would make a perception roll?

A: Right. And it will also probably prompt individual characters (not players) to imagine what is around them, rather than pass it to them explicitly. In multiplayer particularly, in close proximity to people, it will probably pass explicit positions and object types. So, no matter what kind of trees we think they are, they’re at least in the same place and we can all walk around.

But, Dudecon’s idea for this does not mandate a congruous viewing experience. And it may not be a very traditional one. All trees might just be “tree” icons, basically, until we look more closely. Or, “deciduous tree” models if we are particularly observant. Details don’t matter until and unless we need them. So if you see an oak and I see a maple, who cares? That tree didn’t matter to either of us.
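NOTE: The oak-versus-maple point amounts to lazy, per-observer detail resolution. Here is a sketch (the species list, perception numbers, and beliefs map are all invented): the tree itself stores no species; each observer resolves one in their own world image, and only when they look closely:

```python
import random

SPECIES = ["oak", "maple", "birch"]

class Tree:
    pass  # no species stored on the tree itself; details are per-observer

class Observer:
    def __init__(self, perception, seed):
        self.perception = perception
        self.rng = random.Random(seed)
        self.beliefs = {}  # this observer's private image of the world

    def observe(self, tree):
        if self.perception < 0.5:
            return "tree"  # generic icon; species is never looked up at all
        # Details resolve lazily, in the observer's own world image, so an
        # oak for you and a maple for me can both be "right".
        if tree not in self.beliefs:
            self.beliefs[tree] = self.rng.choice(SPECIES)
        return self.beliefs[tree]

tree = Tree()
walker = Observer(perception=0.2, seed=1)
ranger = Observer(perception=0.9, seed=2)
print(walker.observe(tree))  # "tree": unresolved and unsimulated
print(ranger.observe(tree))  # a specific species, invented only now
```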

Q: Ah, that might be a bit strange: to see a tree, then look closer and have it turn into a completely different tree. See, that’s where things get weird. Instead of always showing the same tree and letting the actual human player’s mind interpret it, you are changing the actual appearance. That’s an interesting approach.

NOTE: Quite so! We may also be playing characters with vastly differing perceptive abilities. Or characters who care about very different things regarding trees and their properties. The player should be playing their character, and not all characters are the same. There is no particular reason that the players will need to see the same thing on their screen for them to be perceiving the same game-world, especially through different eyes.

A: Right. Dudecon has also suggested not showing individual trees at all. Show something more nebulous that just means “trees”.

Q: Hmm.

A: Maybe it’s an obviously matte backdrop of trees. Maybe it’s a cloud that bears the label “Lots of Trees.” I suspect it will be a mix of both. The background is just “lots of trees”, whatever that looks like. The foreground, say within 20-50 feet, will have actual tree items, but they won’t be specifically tall or short or round or leafy until we actually look at a tree. Because most of the time, you’re just walking through a forest. And when you get home, and someone asks, “Where’d you go?” you say, “I took a walk through a forest.” And perhaps you say, “I saw this one tree; it was really gnarled and scraggly. It looked like lightning had hit it recently, but it was clearly very old.” So, you looked up one forest, and one tree. Not 15,000 trees.

Q: You know, maybe I’m missing something important, but why are you guys trying to simulate the human mind when you already have a player there with an actual human mind? Is it just for consistency with the NPCs?

A: For NPCs

Q: Ok

NOTE: This point is passed over with a few words, and it needs no more, except for emphasis. Therefore allow me to reiterate, if for no reason other than to stress this critical aspect: we are trying to simulate the human mind so that characters not controlled by the player (and even, to a large extent, the player character) will act in a recognizably intelligent way. Hopefully, we will even overshoot human intelligence, and create something super-human (at least in its own realm of simplified simulated reality).

A: And also to help prevent “metagaming.” So, when we play D&D, I know a lot of the rules. I know many, many of the spells, and I know stats for many of the monsters. When we fight a goblin fighter on a worg, I can make guesses about their stats and abilities, and I’m probably right within 10%, 90% of the time.

But, Ri doesn’t know any of these things. She sees an ugly green man-thing on a wolf bigger than a horse. It’s terrifying! The burden is on me to imagine Ri terrified, even though I know that the encounter will probably be of middling difficulty. So suppose, instead of showing me everything, like D&D does, the game only showed me what Ri sees and knows? Ri doesn’t even know what goblins are, so I don’t see a goblin. The thing is ugly, unnatural, and green, so the game sims an ugly green scary thing to show me.

When you play NWN or KOTOR or something like that, you have an omniscient view of your encounter. There are 3 bad guys, or there are 7. They all look a lot the same as other similar bad guys. Three renegade pirates? No problem. Seven? OK, let’s pay attention. If what the player sees actually varies based on their character’s perception of the situation (“there’s so many! we’re surrounded!”), rather than the truth of the situation (“there are 7 goblins, mostly in front of you”), it (hopefully?) increases depth.

Q: Ah. It will be different and interesting, for sure, to start doubting what you, as a player, actually see.

A: Right, exactly. And growing more accustomed to relying on the things your character has learned, as he grows in confidence about them.

NOTE: Well, that’s the end of the conversation for now! Please leave your own questions or ideas in the comments below! Hopefully we’ll end up doing another of these discussions sooner rather than later. If you’d like to discuss these or other Fledgeling-related topics with me, I’d be happy to talk to you. Ideas cannot be stolen; they can only be made real.

Comments on “Fledgeling Discussion 01”

  1. In the case of the people talking, it seems that the surrounding nodes would need to broadcast their shatter conditions so that the nodes currently being simulated know what factors to calculate. If there are no surrounding nodes that care about sound, then figuring out that value seems like a waste; besides, it could always be figured out retroactively if one character needs to remember whether they were shouting or not. Perhaps, even, the only factors that need to be simulated are those that relate to shatter points of surrounding nodes. Each node would always be watching for a node that is being simulated within whatever range (distance, time, economic status, whatever) and then tell the simulated node the conditions it requires to be simulated in its own area. If one character has a behavior change when it perceives too much aggression, only then is the amount of aggression calculated in the conversation. Like sarcasm: it’s not even thought about (thus not simulated) amongst those who don’t care. Has this been thought about already?

    • Thanks for the interesting comment!
      Shatter points shouldn’t be actively broadcast to actors, because they require perception to observe. Most of the time, the requirements will be imperceptible, so actions will need to be based on past experience, extrapolation, and speculation. For example, it’s nearly impossible to know beforehand how much sarcasm is required to offend someone, and these conditions change with time (just like avalanche conditions). In the end, even though the system will probably simulate it, it’s going to require a lot of guesswork on the characters’ part unless they want to do a ton of research.
      That said, the local GM (“archon” is the term we’ve been using for this) will certainly need to know about these conditions. Like you say, if there’s a factor that is not a condition for any of the involved nodes, then the archon won’t need to sim it (pro or retro). Sarcasm probably isn’t a factor in setting off an avalanche, for example.
      But this raises a problem… which I’ll try to cover in my next post. Thanks!

