Wednesday, November 23, 2011

Darwinian_Coding: ( Talking to your AI's: Universal Language )


Context: where we use this
"Creature Dialog" covers how different video-game entities, such as Space Marines or Nihilist Droids, exchange knowledge and feelings, such as "might you know where is the extra food", "i am seriously hungry", or "give me your food now".  Creatures can communicate in many ways, including speaking words, smiling, waving hands, or lobbing grenades.  Often in our games or simulations, we code communication as giving the listening creature read-access to the speaking creature's data, such as 'here is the location of the rebel base' or 'my emotion state is angry'.  In a multi-threaded world, this is probably a copy of that information instead of just 'read-access', but in either case that information can be used to make future creature decisions.  This style of perfectly understood communicating makes creature dialogs predictable and easy to design and debug, which is a universally good thing(tm).  However, while coding a large predator-prey-producer ecosystem simulation (imagine a dinosaur-world or insect-realm), that same comfortable predictability eliminated surprises and withered any enthusiasm for observing creature interactions.  This dullness helped us to realize how important misunderstandings were to recreating believable and interesting interactions. We needed a way to code miscommunication to diversify each dialog and still be able to easily design/debug the system.

Goals: what we need

  • A service that lets a speaker creature deliver a message to a listener creature.  We will call this transaction-system a 'dialog', but it may be implemented as more than mere words, such as voice-tone, body-language, or released scents.
  • A means to model misunderstandings that can result in complex but trackable (log-gable!) responses.

Solutions: how we tried

Technique:  Predefined Confusion 
We support queries (give me your knowledge), responses (answers to queries), and commands (add this behavior to your planned list of behaviors).  All of those message types carry a single piece of information which may have a misunderstanding built into it, such as the wrong location of the rebel base (an ambush) or begging for food (when the creature is already full).  It seems that many classic CRPGs follow a similar route in their dialog trees when players are communicating with NPCs.
Pros: Straightforward to understand what will happen and easy to control outcomes.
Cons: Doesn't really give any variety, just a separation between the truth (such as where the base is really located or how hungry you are) and a lie.
Technique:  Probability of Confusion
Building on the previous method, each dialog message has three possible outcomes depending on whether the listener is friendly (positive), unknown (neutral), or enemy (negative).  Each creature has a limited memory of other creatures (a top-ten list) and a -1..+1 value to describe like/dislike.  Before any query, response, or command is issued, a random number is generated, scaled by the like/dislike value (if the listener is known), and classified as positive/neutral/negative.  This approach seems to follow classic pen & paper RPGs (like Dungeons & Dragons) with their alignment match-ups, or recent CRPGs like Bioware's Mass Effect or Knights of the Old Republic.  This needs to be augmented with a system that allows the like/dislike values to readily change based on interactions, which we leave as an exercise for the reader.
Pros: Manageable complexity and a nice amount of diversity compared to a fixed outcome.
Cons: Requires any piece of information, such as a location or hunger level, to have 3 possible values to return depending on the like/dislike value for any queries.  Some of this 'false-data' can be auto-generated to be the opposite of the truth or 'close to the truth', but many discrete items like 'what is the name of the traitor' or 'where are you going' need coding-rules or fixed designer-choices to make sense in the world.
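A minimal sketch of that roll, assuming a -1..+1 like/dislike value and a uniform random source; the thresholds, the use of rand(), and combining the roll with like/dislike by a simple bias are illustrative assumptions rather than the original implementation:

#include <cstdlib>

enum Outcome_t { Outcome_Negative_k, Outcome_Neutral_k, Outcome_Positive_k };

//classify a dialog attempt; Like_dislike_f is -1..+1, or 0 when the listener is unknown
Outcome_t Classify_dialog_outcome( float Like_dislike_f )
{
    float Roll_f   = ( (float)rand() / (float)RAND_MAX ) * 2.0f - 1.0f; //-1..+1
    float Biased_f = Roll_f + Like_dislike_f;   //the relationship nudges the roll
    if( Biased_f >  0.33f ) return Outcome_Positive_k;
    if( Biased_f < -0.33f ) return Outcome_Negative_k;
    return Outcome_Neutral_k;
}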

Survivor: who proved best & why

Technique:  Emotion_Vector Confusion
Instead of using the scalar like/dislike and a random value to select between 3 possible outcomes of any given message, here we store an 'emotion-vector', such as x = angry, y = scared, z = tender, w = excited, s = happy, t = sad, which is attached to each dialog-msg that is sent.  For any dialog-msg, we dot product the listener's and speaker's emotion vectors for each other to accumulate a "certainty" or trust value for the message.  This trust value can be used in any creature decisions such as "go to the rebel base" vs "it's a trap" or "give the beggar food" vs "do not share".  Note that the emotion vectors can be retained for each pairing of creatures, so it forms a more complex version of like/dislike for each creature-combo.  This is not ideal, but it approximates the ability for speaker and listener to relate to each other or to deceivingly attempt to relate.
Pros: Gives us quite a range of diverse behaviors.  When the emotions attached to messages are allowed to influence the 'current-thinking' state of the listener's emotion_vector relative to itself, it follows expected models of anger begetting anger or fear, happiness cheering up sadness, or an unexcited speaker being dismissed by a listener.
Cons: Not only requires coding-rules or fixed designer-choices to provide believable misinformation as before, but now also requires initializing an emotion vector for various creatures or groups/archetypes of creatures so that cats start off being scared of dogs and dogs are angered by cats, etc.
Future: Refine this approach into a means to visualize all the creature-creature interactions as a sea of dialog and be able to watch trends (if any).
Survivor_Datastructs
//===
//PSEUDOCODE

enum Purpose_Msg_t
{
  Msg_Query_k,      //seeks a 'statement' response
  Msg_Statement_k,  //has info
  Msg_Command_k,    //the info is a behavior script or value to change
};

typedef float Norm_t; //value from -1..+1

struct Emotion_Vec_t
{
  Norm_t Angry_f;
  Norm_t Scared_f;
  Norm_t Tender_f;
  Norm_t Excited_f;
  Norm_t Happy_f;
  Norm_t Sad_f;
};

struct Dialog_Msg_t
{
  Purpose_Msg_t purpose;
  Time_t Delivery_Duration__time;

  Percent_t Speaker__Clarity_f;
    //how easy was the message to understand
    //(adds a random +/- deviation to the emotion vector prior to dot product)

  Percent_t Listener__Trust_f;
    //stores the result of the dot product of the emotion vectors to associate
    //the information with a trust aka certainty level

  Emotion_Vec_t Emotion_vec;
    //adds a means to filter/misunderstand

  Percent_t Intensity_f;
    //if higher than 50% or so,
    //it is an 'exclamation' in the traditional sense

  Name_t Info_name;
    //such as "Rebel_base location" or "i am hungry"
  Infotype_t Info_type;
    //such as "location" or "hunger-level"
  Data_t Info_value;
    //such as "lat 42, long 12" or "very-hungry" but not used when a query
};
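
Below is a hedged sketch of how those structs might be folded into Listener__Trust_f: jitter the speaker's emotion vector by (1 - clarity), dot it against the listener's vector, and remap the result into 0..1.  Add_random_deviation and the remapping constants are illustrative assumptions, not the shipped code.

//===
//PSEUDOCODE (illustrative trust computation)

Norm_t Dot( const Emotion_Vec_t& a, const Emotion_Vec_t& b )
{
  return a.Angry_f   * b.Angry_f
       + a.Scared_f  * b.Scared_f
       + a.Tender_f  * b.Tender_f
       + a.Excited_f * b.Excited_f
       + a.Happy_f   * b.Happy_f
       + a.Sad_f     * b.Sad_f;
}

void Compute_listener_trust( Dialog_Msg_t& msg,
                             const Emotion_Vec_t& Speaker_toward_listener,
                             const Emotion_Vec_t& Listener_toward_speaker )
{
  //Add_random_deviation is a hypothetical helper: jitter each component by up to
  //(1 - clarity) so mumbled or ambiguous delivery widens the room for misunderstanding
  Emotion_Vec_t Noisy = Add_random_deviation( Speaker_toward_listener,
                                              1.0f - msg.Speaker__Clarity_f );

  //dot product of the two relationship vectors, remapped from -6..+6 into 0..1
  Norm_t d = Dot( Noisy, Listener_toward_speaker );
  msg.Listener__Trust_f = ( d / 6.0f ) * 0.5f + 0.5f;
}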
How to code their communications & misunderstandings...


Darwinian_Coding: ( Giving your Creatures Feelings )


Context: where we use this
"Giving your Creature Feelings" covers adding feelings to video-game entities, such as Space-Marines or Mutated-Chickens, to better emulate player expectations or create a more believable and responsive simulation.  By attaching 'emotional-tags' (love, hate, excited, scared) and amounts to a creature's existing knowledge units, we can model emotional interactions and possibly relate to the lives of our creatures.  The focus here is on simulated worlds where the creature behaviors are not scripted or fixed to always follow the same path.  In this context, we regularly run "time-has-passed-so-update-this-creature's-thinking" code.  By giving creatures feelings, we add a second dimension to all the existing thoughts with an 'emotional' component.  We can use that to re-order/prioritize knowledge, especially competing/contradicting thoughts that are deadlocked in importance/certainty, such as "why did it have to be snakes...i hate snakes" (fear, snakes, .800) versus "i must find the ark" (desire, ancient_artifacts, .811 ).
Remember that inserting 'feelings' doesn't make your game/simulation a 'touchy-feely/psychologically-soft' affair; it deepens what your creatures can do with their thoughts and makes their behaviors more diverse over time as these feelings wax and wane out of sync with the knowledge in the thoughts themselves.

Goals: what we need
  • A means to attach an array of various feelings to any unit of knowledge
  • A means to weight decisions and distort existing priorities
  • A way to store a series of feelings as a sample-able field, like a texture map, to have metrics that can be scaled down when memory-size matters and to be able to filter for 'hindsight shifts'

Solutions: how we tried

Technique:  Like-to-Dislike scale 
Our first attempt at adding 'feelings' attached a single byte to each unit of knowledge representing a -1.0 to +1.0 range of like to dislike.  It complemented the 'certainty' value that determined how much trust we had in the truth or completeness of the information.  We had been making a prehistoric continent simulation that had around 1,000 villages of about 50 people each.  In this simplified tribal village, each day hunter-gatherers left to search nearby areas for food.  Before we added feelings, we had each villager make a random decision about a direction to go and return home if it found food.  We had lots of wandering folks who repeatedly went to places that yielded no food and other odd choices that gave us an unlikely-to-ever-happen-this-way vibe.  As we added feelings, we had them decide where each would go by sorting their 'thought list of known places' with the 'like' value (every known place started off neutral).  If they found food, they would increase the 'like' for that location and be more likely to head to favorite spots in the future.  They would also scale their decisions by the 'like' score they had for other villagers already headed that way (we let them be telepathic and just read each other's data to simplify a dialog system).  This created cliques of villagers who found success together and popularized searching similar spots.  At the same time, frequently returning to the same spots reduced the likelihood of finding food as the grazers (cow-like herbivore things) would move to new areas over time.  Adding 'jealousy' further hardened these cliques as we let each villager raise/lower the like score for their peers based on whether their success was with or away from them.  Not too much code, and we suddenly had packs that regularly patrolled certain areas, lone wolves who covered the same territories at different times, and a variety of intriguing movement.
Pros: Simple amount of code gave us some diverse behavior.
Cons: Didn't adequately capture the full range of (often competing) feelings we wanted to add in.
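As a concrete illustration of the daily 'where do I forage' choice above, here is a small sketch assuming the like score is kept as a signed byte and known places live in a simple array; the names and the +16 reward step are made up for the example:

#include <algorithm>
#include <vector>

struct Known_Place_t
{
    int         Location_id;
    signed char Like_i8;   //-128..+127 maps onto the -1.0..+1.0 like/dislike scale
};

//pick the best-liked known place to forage (every place starts off neutral at 0)
int Choose_forage_spot( std::vector<Known_Place_t>& Places )
{
    std::sort( Places.begin(), Places.end(),
               []( const Known_Place_t& A, const Known_Place_t& B )
               { return A.Like_i8 > B.Like_i8; } );
    return Places.empty() ? -1 : Places.front().Location_id;
}

//on a successful hunt, nudge the like score up (saturating) so favorites emerge over time
void Reward_place( Known_Place_t& Place )
{
    Place.Like_i8 = (signed char)std::min( 127, (int)Place.Like_i8 + 16 );
}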
Technique:  Contribution-Vector 
Instead of just a 'like/dislike' scale, we let the designers add a bunch of opposing feelings such as love-hate, interested-bored, fear-desire, etc.  We treated each pair as a dimension and would dot product them all to weigh each decision (thus folding many feelings into the decision instead of just the single positive/negative score).
Pros: Increased the factors in decision making and gave the designers more to work with for villager interactions.
Cons: Didn't really benefit until we had more means to express these feelings.  We probably should've added thought-bubbles above their heads to help players grasp the depth of their decisions.  We realized that opposing feelings often occur simultaneously, such as love and hate at the same time, which means storing two unsigned values instead of a single polarizing signed score.  Lots of memory.
Technique:  Filtering-Memories
We started storing feelings in a history per villager each day.  Then we added the ability to query feelings in the past, and we used a max filter to let strong emotions overturn the accuracy of memory, such as "it's always dangerous on the top of the mountain" instead of "actually, it was dangerous 3% of the time you went there".
Pros: Added a richness that satisfied designers.
Cons: A lot of code and processing went into providing subtleties that few could notice without a technique like a thought bubble to show them.  Tons of memory.
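A tiny sketch of the max-filter query: the remembered danger of a place is the strongest fear ever felt there rather than the honest average; the per-day history container is an assumed simplification:

#include <algorithm>
#include <vector>

//"was it dangerous up there?" answered by the single worst day, not the 3% statistic
float Recall_fear_of_place( const std::vector<float>& Daily_fear_history )
{
    float Strongest_f = 0.0f;
    for( float Fear_f : Daily_fear_history )
        Strongest_f = std::max( Strongest_f, Fear_f );
    return Strongest_f;
}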

Survivor: who proved best & why

Technique:  Meme-Propagator using Mood 
Using the contribution-vector approach, we compressed feelings and let villagers share them, saving memory while keeping a wider range of decisions.  At the end of each day a villager's individual thoughts were compared to others (a loop of memory XORs over the thoughts) and if the match was over a threshold, such as 60% similar, we would collapse those thoughts and treat them as a 'group thought'.  We explained it as 'trending' or 'peer pressure' as it allowed us to consolidate many villagers with small amounts of processing and memory but still have a diverse society and unique individuals.
Pros: Space-efficient and faster than the others.  Still provides believable yet interesting decisions.
Cons: Doesn't always produce expected results.  "Irrational" or "psychotic" inconsistencies arise which may be realistic but can crush a game's narrative or simulation's utility if it spreads as a mood-meme.
Future: Connect feelings to facial-expressions and body language so players can read these changes without cartoony thought bubbles or subtitles.

Survivor_Datastructs

//===
//PSEUDOCODE
struct Emotion_Vector_t
{
    String_t Name_p;
        //name of feeling,
        //generally taken from an enum of 10 or so based on the game's theme
    float Contribution_f[ Emotion_Dimensions_Cnt_k ];
        //an array of feeling state like love, hate, fear, desire, curious, bored, etc.
};
struct Feeling_t
{
    Emotion_Vector_t Emotion_Vector__ptr;
        //An index to a vector of 10 or so dimensions that
        //we use to combine together to change the 'importance of thoughts of knowledge'
        //when the entity with this feeling makes decisions
    Percent_t Strength_f;
        //how powerful is this feeling compared to others
    Time_t Realized_time; //when this feeling arrived at this 'strength' level
    Thought_t Begin_thought;
    Thought_t End_thought;
};
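
//===
//MEME PROPAGATION (illustrative sketch, not the original code)
//End-of-day 'group thought' collapse: measure similarity with XOR/popcount over packed
//thought bits and, above the 60% threshold mentioned above, point both villagers at one
//shared block.  Villager_t and the GCC/Clang popcount builtin are assumptions.

#include <cstdint>
#include <cstddef>

struct Villager_t
{
    const uint64_t* Thought_bits;      //thoughts packed into fixed-size bit blocks
    size_t          Thought_word_cnt;
    int             Group_thought_id;  //which shared 'group thought' block is referenced
};

//fraction of matching bits between two packed thought blocks
float Thought_similarity( const uint64_t* A_p, const uint64_t* B_p, size_t Word_cnt )
{
    size_t Matching_bits = 0;
    for( size_t i = 0; i < Word_cnt; ++i )
        Matching_bits += 64 - __builtin_popcountll( A_p[i] ^ B_p[i] );
    return (float)Matching_bits / (float)( Word_cnt * 64 );
}

//if two villagers think alike enough, collapse them onto one shared 'group thought'
void Try_collapse_to_group( Villager_t& A, const Villager_t& B )
{
    if( Thought_similarity( A.Thought_bits, B.Thought_bits, A.Thought_word_cnt ) >= 0.60f )
        A.Group_thought_id = B.Group_thought_id;
}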

How to code the feelings of this Watermelon Hunter...

Darwinian_Coding: ( Implementing Creature Cognitive Replay )


Context: where we use this

"Creature Cognitive Replay" covers how we code video-game entities, such as Space-Marines or Martian-Primates, to learn and store 'knowledge', such as "where are my favorite forest hiding places", "is that large blinking-red object blocking my path dangerous?", or "how do i rebuild my destroyed wigwam".  Creating a believable, living world requires us to separate the omniscient simulation or 'game' knowledge from what an individual creature should be able to know, such as the difference between castle guards automatically knowing the player's position versus seeing/hearing an object move and having to investigate what the sound might be.  While every game's needs are quite different, the focus here is what approaches have worked best simulating 100s to 10,000s of unique creatures in a reasonably balanced ecosystem of producers (plants) and consumers (herbivores & carnivores).  The last three approaches we used were given between 8 & 64 MiB of active 'current-time' creature-knowledge-memory for PC desktop applications but could easily move to mobile or console spaces.  The 'replay' part of the title handles timeline based thinking so we can 'rewind/fast-forward' our simulation and the knowledge will be accurate/consistent.  In this discussion, we use 'knowledge' as a unit of information and a 'thought' as a piece of knowledge accessible to an entity which can change over time.

Goals: what we need

  • Speedy Queries: entities can ask questions about their knowledge to make decisions (Note: 'decisions' are made using an interpreted script or pre-coded behaviors)
  • Instancing & Breeding: supports 'generic' abstractions such as 'space-tiger' vs specific tigers and allows cloning, breeding, and other ways to mix any inherited knowledge when they are created at runtime.
  • Trending: can store the expected/default/popular thoughts (everybody knows that SpaceCola increases your speed) vs fringe/unique thoughts (SpaceCola decreases my health)
  • Trust: supports 'certainty' to help decide how reliable the knowledge is to the entity
  • Origin: stores knowledge from different sources, such as instinct vs communicated vs sensory vs analytical thought-process
  • Rewind: handles recording/playback of each creature's knowledge over time to permit simulation rewind and cognitive replay

Solutions: how we tried

Technique:  Straightforward-Object-Oriented Programming ( SOOP ) 
Our first attempt at representing "knowledge per entity" used classic Smalltalk/C++ object-oriented programming with a hierarchy of 'classes of knowledge' and a vast virtual-function interface to communicate with each class.  Templates were not supported back when we tried this approach and we had a huge base of files to handle any interactions with the knowledge and ownership-access.  We had to anticipate any possible query an entity might want to make ahead of time to match the entity type (Gorilla) with the member functions of the knowledge types ( Mammal, Physical, Jungle_Prowler, Nest_Builder ) and the permutations quickly got out of hand.
Pros: Easy to code & debug using a design doc for each type of entity.  Given fixed entity behaviors, little to no design changes when coding, and few iterations of tweaking the behavior code, this is a safe, reliable choice.  Easy to rewind/fast-forward given the fixed structures of the SOOP approach, as we can just difference the bits between two structures and run-length/huffman-encode those deltas.
Cons: Creating new queries required new code, which was expensive in time and couldn't be done by a designer.  Given the classes were fixed at runtime, we could not create new knowledge-classes or modify their member types in real-time.  Using over 600 knowledge-class files took a long time to compile and made a ginormous executable.  Given the number of classes involved to instantiate most entities, such as dog or cat or space-tank, it also took a while to create and destroy creatures, which did not scale well to large numbers.  While a straightforward OOP approach is appealing to begin with, maintaining/evolving the code was not well suited to the real-world development demands of sudden/frequent design changes and complex iterations of tweaking behaviors.  After missing two milestones, we reworked our schedule to switch to the TAM approach below.
Technique:  Tuple-Arranged-Minds ( TAM )
Inspired by the admirable and rich AI legacy of functional languages like LISP, Prolog, & Scheme, we decided to implement something similar that interfaced with our existing codebase.  Back then in the mid-1990s, we simply couldn't find an 'open-source' engine to plug in, so we gave each entity a red-black-tree of linked lists.  Each 'list' node had a knowledge-type string such as enemies, goal_location, or favorite_weapon and one or more thought-strings such as 'robots, mosquitoes', 'treehouse', or 'shotgun, gasgun, sonic-screwdriver'.  We called these nodes "thought-tuples" and used them for function-style queries, such as returning 'no' for "Does_like( self, mosquitoes )".  Most importantly, we could combine queries such as "Create_Match_List( my.enemies, my.favorite_weapon, my.enemies.favorite_weapon ) Sort_by Distance( enemies.location, my.location )" to find an ordered list of which enemy we should hunt next to stock up on our favorite ammo.  All of the strings used were stored per game level (a simulation-scene in a particular space and time).  We avoided storing any string duplicates by giving each entity's thought-tuples fixed indices to strings (similar to LISP's atoms).  On the downside, we never destroyed knowledge as reference counting was too expensive, and this resulted in some levels growing too large.  We tried building an 'amnesia-processor' to search all entities and reduce any unused knowledge or collapse new thought-tuples into old ones, but this simply generated weird behaviors.  To handle the rewind/fast-forward for cognitive-replay, we kept a list of offsets/deltas to each entity's tree of thought-tuples, but this required updating the tree for each change.
Pros: TAM had tremendous flexibility in creating new knowledge.  If the designer can write it in a sentence, it can be stored for an entity.  TAM let designers edit in realtime using text strings and make any sort of query they could consider.  TAM permitted real-time cloning/breeding of entities, and their behaviors diverged based on what happened to them over time.  TAM had easy life-cycle management of code ( around 15 files ) compared to SOOP.
Cons: Sometimes it was difficult for designers to use and understand the results of their queries.  Although quite flexible, it ran terribly slow for all but the simplest queries.  Modest queries that ran @ 1 or 2 Hz for each creature, such as "Distance( target.location, my.location )" could back up and stall the main loop when walking the thought-trees for 100s of entities ( Pentium Pro & II era ).  While the thought-tuples delivered some of what we needed to simulate large evolving worlds, the red-black-tree w/ pointers to lists of strings approach had too much overhead in answering queries.
Technique:  Individual_Mind_Packages ( IMP )
IMP was an approach to collapse the string-storage of TAM's thought-tuples into pre-defined binary types for faster processing and compact space.  Packaging everything into a linear memory block of 'thoughts' per entity greatly improved cache-coherence.  Instead of a red-black tree that matched strings and returned linked lists, we created a hash-key for each type of knowledge and used it to retrieve an offset into that linear memory block or 'mind-package' to retrieve the knowledge queried.
Pros:  IMP translated most queries into only a few instructions which made it quite speedy compared to TAM while retaining the same flexibility of run-time queries.   Designers really needed those run-time queries to rapidly iterate on behaviors as they made game levels.  IMP gave us fast create/destroy mechanics as we only needed to allocate/free a chunk of memory in a pre-allocated pool.
Cons: Although we saved space by compacting thoughts into various binary types, this approach required each unique entity to allocate all the possible knowledge it could ever store upfront.  The upfront cost was because we needed to re-use the large array of hash-table offsets per entity type and that 'large-array' was usually larger than the size of per-entity knowledge.   This limited what an entity could learn/store, such as only 5 friends and 3 enemies, and used the same amount of space as if each entity had learned/stored everything it could.  Since all entities started off using 'defaults' or a blended combo (chromosomal-style mixing from two parents), most of the knowledge was being duplicated, unlike in the TAM system.  That made changing any defaults cause a large hiccup as we iterated all entities who used those values.  IMP is hard to parallelize for large (10,000+) numbers of entities doing lots of queries per update as we can't lock each unit of knowledge without increasing its size with lock/unlock semantics.
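A sketch of the IMP lookup path: hash the knowledge name, index a per-entity-type offset table, and add that offset into the entity's linear 'mind-package' block.  The table size, struct names, and layout are illustrative assumptions:

#include <cstdint>

enum { Knowledge_Hash_Slots_Cnt_k = 1024 };  //assumed table size

//one offset table shared by every entity of a given type (all space-tigers, etc.)
struct Mind_Layout_t
{
    uint32_t Offset_by_hash[ Knowledge_Hash_Slots_Cnt_k ];  //byte offsets into the mind block
};

//each entity owns one linear block of packed thoughts plus a pointer to its type's layout
struct Mind_Package_t
{
    const Mind_Layout_t* Layout_p;
    uint8_t*             Block_p;
};

//a query becomes hash, index, add -- only a few instructions
void* Find_knowledge( const Mind_Package_t& Mind, uint32_t Knowledge_hash )
{
    uint32_t Slot   = Knowledge_hash % Knowledge_Hash_Slots_Cnt_k;
    uint32_t Offset = Mind.Layout_p->Offset_by_hash[ Slot ];
    return Mind.Block_p + Offset;
}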
Survivor: who proved best & why
Technique:  Sparse_Individual_Thought_Tables ( SITT ) 
After analyzing several game levels, we realized that around two-thirds of our entity minds were still using defaults, as many entities would converge on similar thoughts or never really had an experience that changed their initial thoughts.  This led us to extract an entity's thoughts into database-style tables for each knowledge type.  This compacted per-entity storage by using the 'atom' technique from TAM (and indirectly from LISP) for all of the default/inherited/trending information and still gave us unique, changing thoughts.  Now each entity has its own skip-tree of thought-atoms that reference the knowledge as well as its access/ownership traits in the event of any changes.  Cognitive replay becomes much more efficient as previous thought values are likely to be stored in the same database table, acting like dictionary compression before we even begin to compare the thought deltas.
Pros: Having removed the large linear memory block per entity and large hash-tables per entity type, we can efficiently run on multiple threads by using atomic compare-and-exchange to lock the database knowledge instead of the entire entity.  That means that, for a single entity at one time, a physics thread can update forces, a thinking thread can determine a new set of goal-priorities, a path-thread can plan obstacle avoidance, a communication thread can interpret (or misinterpret) dialog, and a metrics-thread can tally up statistics that this entity remembers.
Cons: For behaviors that access a lot of different thoughts for a single entity, the cache-coherence is poor as many different database tables are loaded.  Requires a lengthy processing step to separate the thought-database tables for batches of 'game-levels' or simulation-scenarios.
Future: Work on ecologies with millions of creatures and see how multi-CPU-cores or GPU (OpenCL?) processing does with complicated behaviors and frequent changes to entity thoughts.

Survivor_Datastructs
//===
//PSEUDOCODE
//===

//THOUGHT_ACCESS
enum Thought_Access_t
{
Thought_Access__Universal_k,
//fixed, accessible by all,
//generally used for math constants or universal enumerations like
//the 6 3D cardinal directions (up/down/left/right/ahead/back),
//names of places, or the 7 colors of the rainbow, etc)

Thought_Access__Entity_Type_k,
//thought that is stored specific to the "abstract description" of the entity, such as
//all cats, or all siamese-cats, etc.
//This is where you'd store knowledge that all cats would
//have such as 'love mice', 'hate dogs' or 'fear vacuum cleaners'

Thought_Access__Shared_k,
//these are thoughts that are shared amongst a group, such as 'who is the Orc leader we follow',
//what is our 'marching-song', or our 'slogan'.  Any change here happens for the entire group

Thought_Access__Unique_k,
//Unique thoughts that can be changed for a unique individual,
//such as favorite_food, suspicion_of_traitor_ID, or hunger_level
};

//===
//THOUGHT_SOURCE
enum Thought_Source_t
{
Thought_Source__Instinct_k,
//Inherited instinct
//or taught/told by a trusted person so long ago it is equivalent to instinct

Thought_Source__Sensed_k, //Sensory experience, as in seen, heard, tasted, etc.

Thought_Source__Communicated_k, //Thought came from dialog or body language communication

Thought_Source__Thinking_k,
//Thought came from examining existing thoughts
//and generating new ones using code
};

//===
//KNOWLEDGE_FORM
enum Knowledge_Form_t
{
Form_Number_k, //Quantity
Form_Name_k, //Reference or ID#
Form_Style_k, //Quality
Form_Collection_k, //Aggregate of more information units (deeper type)
};

//===
//KNOWLEDGE_LOCATION
struct Knowledge_Location_t
{
Database_Table table;
Database_Key Begin_key; //Represents a range of keys that can be used
Database_Key End_key;
};

//===
//ENTITY_THOUGHT
//Each Entity has an array of "Thoughts" for the types of Knowledge it
//can understand
struct Entity_Thought_t
{
Thought_Access_t access;
Thought_Source_t source;
Knowledge_Location_t loc;
Percent_t Certainty_f;
Time_t time;  //when this thought occurs, helps to interpolate between past and future thoughts
Linked_list_t Prev_thought; //reference to previous thought
Linked_list_t Next_thought; //reference to next thought
};

//===
//KNOWLEDGE_UNIT
struct Knowledge_Unit_t
{
Entity_t owner;
Knowledge_Form_t form;
Information_Unit_t value; //int, float, unsigned_int index, etc.
};
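
//===
//LOCKING (illustrative sketch, not the original code)
//The per-knowledge locking mentioned in the Pros: an atomic compare-and-exchange on a
//small token kept beside each Knowledge_Unit_t, so the physics, thinking, pathing,
//dialog, and metrics threads can each touch different thoughts of one entity at once.
//Token values and the retry policy are assumptions.

#include <atomic>
#include <cstdint>

struct Knowledge_Lock_t
{
  std::atomic<uint32_t> Token;  //0 = unlocked, 1 = locked for write
};

//returns false if another thread owns the thought; the caller retries on a later update
template< typename Value_t >
bool Try_write_knowledge( Knowledge_Lock_t& Lock, Value_t& Stored, const Value_t& New_value )
{
  uint32_t Expected = 0;
  if( !Lock.Token.compare_exchange_strong( Expected, 1 ) )
    return false;
  Stored = New_value;      //safe: we hold the only write token for this knowledge unit
  Lock.Token.store( 0 );
  return true;
}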

//===
//END
How do we code what these beasts know...

Wednesday, November 16, 2011

Darwinian_Coding: ( Cultural_Text )



Context: where we use this
"Cultural Text" handles rendering text into images to display on-screen, use in textures, or export as a resource file. This set of services is used to correctly format different languages, cultural styles, and font features such as color, strike-thru, indent, or shadowing. It also encompasses managing sentence layout, glyph and line spacing, word-wrapping to borders and optimal legibility tradeoffs with anti-aliasing & pixel granularity.
Generally this is the classic "render-text" functionality found in most video-game engines or provided in libraries like FreeType, with support for Text-FX to display richly formatted layouts.

Goals: what we need

  • An API to render text into arbitrary destinations with pixel-perfect consistency across different platforms & screen sizes
  • Can combine multiple glyphs on top of each other to form a single character (needed for some languages)
  • Can output with colored gradients, shadows, and other visual effects based on a markup encompassing 1 or more characters
  • Supports word wrapping, skipping ahead to a fixed column, and directional 'justify' modes
  • Can handle a full screen of legible text with changing values without performance slowing down
  • Can support fixed as well as variable width layout
  • Can reorient to render text in directions other than the usual flow ( such as english letters arrayed vertically )
  • Supports Logging (to the standard console as well as HTML)
  • Provides for user-controlled (not programmed) 'variable precision' control for decimal numbers, time and date formats
  • Can display two different languages side by side (helpful for in-app language translation) but more importantly allows using rectangles from other images for embedding icons, emoticons, avatar pics, and camera viewport images
  • Text can be transformed, such as moving, bending, squishing, etc
  • Has a method to import data from existing font files

Solutions: how we tried

Technique:  Letter-Only-Blitter aka 'Lob'
Originally we looked at some of the font engines available, but none met all of our platform needs so we decided to generate our own.  We built a gridded font texture for ASCII characters and generated the used subset for Japanese (hiragana, katakana, & ~500 kanji).  It was used on games in the mid 1990s and on the PSOne, so there was little text on-screen compared to these 2560x1600 modern times.  The process converted UTF-16 characters into a texture-page and rectangle-index for the characters themselves.  We stored a flag in the top bit of these 'characters' to act as an escape to trigger offsetting the top vectors (italics) or upscaling (low-quality 'bold').
Pros: Ran reasonably fast with hardware blitters or CPU software copying.  Allowed real-time typing and editing of the Text-FX like in many wysiwyg editors.
Cons: Could only handle left-to-right layout.  Only supported english, spanish, french, german, and japanese (these were a fixed enumeration), and further languages would've required a lot of table-adjustments and possibly other coding to use.  Required artists to fill in 'rectangle-text-files' and build the fonts manually (no font-file extraction), which was painful.  Runs terribly slow on systems that have a high penalty for each draw call (modern era).  Japanese used a lot of memory, which required the font to be scaled down and resulted in unsatisfyingly blurry text at the time.  Even now, compositing the characters using radicals could save meaningful space and deliver a broader range.
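A small sketch of the Lob character lookup described above, with the top bit of the stored 'character' reused as the italics/bold escape; the packing layout and names are illustrative assumptions:

#include <cstdint>

typedef uint16_t Glyph_Ref_t;  //packed: top bit = style escape, low bits = rectangle index

const uint16_t Glyph_Style_Escape_Bit_k = 0x8000;

struct Glyph_Rect_t { uint16_t X, Y, W, H; };  //pixel rectangle inside a gridded font texture

//translate a stored 'character' into the source rectangle to blit, noting the style escape
Glyph_Rect_t Lookup_glyph( Glyph_Ref_t Ref, const Glyph_Rect_t* Page_rects_p, bool* Styled_p )
{
    *Styled_p = ( Ref & Glyph_Style_Escape_Bit_k ) != 0;  //italic/bold tweak requested?
    uint16_t Rect_index = (uint16_t)( Ref & ~Glyph_Style_Escape_Bit_k );
    return Page_rects_p[ Rect_index ];
}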

Technique:  Word-Particles aka 'Wopa'
This technique was spawned primarily from the batching issues ( costs per draw-call ) found in the Lob system.  The idea was to have two systems, one that rendered words in powers-of-two-sized rectangles inside of large textures, and one that composited sentences to their destination.  We initially had a big speed boost on systems where batching mattered but the code became complex to handle optimal fitting of the words into the 'recently used words' textures.  We had to handle a downsizing approach when too many unique words were needed at once. It did allow us to render Japanese at a much higher resolution than Lob did however.  We used a fast-hashing scheme to identify which words were stored in which rectangle/texture.  We were forced to constrain the text FX approach to be per-word, which affected color tints, italic-tilts, and shadowing effects but mostly didn't limit our artists' goals.
Pros: Automated tools generated the texture information from images.  Avoided batching-woes.  Regular text browsing, such as in 'help' menus or information updates, went very quickly.
Cons: On systems that didn't support render-to-texture, speed suffered due to poor Copy_Pixels_from_Screen_to_Texture or Texture_Upload times (when we rasterized the words on the CPU).  Rapid number updates could cause stutters as the mru-word textures could get overloaded.

Technique:  Just use the OS aka 'Juto'
After struggling to properly handle Arabic, Thai, Hindi, Hebrew, and the various languages using Chinese characters, we decided to use the native OS capabilities to composite text into a buffer and upload that to the GPU for rendering or to format for export.  This approach allowed us to skip many of the complexities that had cost so much time in QA.  As this happened before Microsoft's DirectX "DirectWrite" API, we used GDI+ on Windows, FreeType on Linux & BREW cell phones, and Cocoa on OSX.
Pros: Most of the foreign language single-word issues were handled correctly.  We could reach a broader audience and support translators easily.
Cons: It was costly per draw call & update.  Adding features like underline or colored letters became very complicated due to tracking various sizes and issues with the OS allocating buffers (not FreeType however).  Foreign language paragraphs still had a lot of complexity and required different per-platform coding-responses.  Most of the Text-FX features were inconsistent from platform to platform.

Survivor: who proved best & why
Technique:  Cached lines of Variable-Interval-Composites aka 'Clovic'
Clovic is an outgrowth of Wopa that relies on caching entire lines instead of words.  It uses a simple string sorting/matching approach to determine what text is on what line.  Each 'text-texture' is broken into a series of lines.  Each line is packed using half powers of two...such as 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, etc., which gives better coverage than the previous power-of-2 W x H rectangles.  We convert UTF-8 directly into texture/rectangle references as before, but allow mixed language compositing to support icon-based values...such as the signal or battery-life indicators on your phone.  The text FX have been unified with the regular "render things into a viewport" visualizer language so that each 'cache-line' of text can support all of the visuals (blur out, HDR, movement) of any regular 3D scene.  For speed purposes, we update each of these lines at a slower rate than the main display rate.  Values like health or location coordinates which may rapidly change seem acceptable to update at 10fps instead of 30 or 60.

Pros: Simplified the per-word fitting schemes of Wopa.  Easier to handle multi-cultural language layouts with a per-line (aka continuous-run) approach to kerning/spacing issues.  Makes true bold or outlining ( using a bloom filter, not rescaling ) much cheaper and more accurate.
Cons: Hard to tune memory use and requires overestimating the amount of text needed.  Current anti-aliasing approaches ( MLAA, FXAA ) are not well suited and aliasing artifacts are apparent.
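A sketch of the half-power-of-two line buckets (2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, ...) used when packing cache-lines of text; the rounding helper below is an assumed illustration:

#include <cstdint>

//round a requested line width up to the next half-power-of-two bucket
uint32_t Next_half_pow2_bucket( uint32_t Width )
{
    uint32_t Pow2 = 2;
    while( Pow2 < Width )
    {
        uint32_t Half_step = Pow2 + Pow2 / 2;  //2->3, 4->6, 8->12, ...
        if( Width <= Half_step )
            return Half_step;
        Pow2 *= 2;
    }
    return Pow2;
}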

Future: It would be good to automate a method to show progressive ordering of the handwriting 'strokes', which are important to know well in many languages, especially any employing Chinese Characters.  This stroke approach could make for interesting visual effects as well as the obvious educational aspects.  There also can be value in providing a mechanism to displace the rendered text into 3D shapes (likely a height field where height 0 is an edge) or back into vectors for SVG support.  Mostly, the future should hold more robust versions of different languages and the interesting nuances of rendering messages correctly.
( Lamely the below image, made with Lob, has been JPG'd so some Text is blurred...)

(Here text is composited from other views)








Darwinian_Coding: ( App_Services )


Introduction: why read this blog

Welcome:  Have you ever had (or made) enough time to thoroughly re-evaluate past project coding choices?  It can be a humbling although entertaining activity.  After nearly 30 years of video game programming, I realize that time-scales really affect how you see things, whether it's an 'immediate moment' choice of a variable name, a 'short-term' decision on a comment's level of detail, medium-term class diagramming/system planning, a long-term design document, or the after-shipping post-mortem analysis.  Since reviewing past projects revealed how much I couldn't see at those shorter time scales, I'm targeting this blog at the patterns that are easily missed.  The goal is to caution your design-considerations against my mistakes and share some pros-&-cons of past solutions.  While veterans may find little newness, I hope novices or those currently developing similar systems might at least spark some dialog on their choices.
Hopefully the phrase 'Darwinian Coding' evokes the notion that the most successful coding choices, particularly for long-term large-projects, are not always the most powerful/optimal but instead the most adaptable.  In my journey, the most important metric of code has been how it survives change.  Whether drifting design docs, shifting QA feedback, blame-ridden profiling results, OS/BIOS/Driver/SDK/Compiler updates, or even new hardware/net-services, change can cost time & innovation.  Code-survival means less time re-writing, debugging, and integrating to adapt to those changes.
The bulk of common programming advice, such as Knuth's 'premature optimization is the root of all evil', 'profile-early/profile-often', or 'metrics-metrics-metrics', seems to boil down to the childhood warning against relying on 'assumptions.'  Ironically, as coders we are often forced to make assumptions in the interest of schedule-realities, the need to delegate tasks to others, reliance on the trusted advice of experts, the ease of cut'n'paste code, or sometimes a lack of other options.  Code survival relies on identifying those assumptions and finding the patterns in them that will pop up elsewhere.
Each blog-article will cover these 'assumption-patterns' given a problem's context (where we use this code), goals (what we need from the code), solutions (how we tried to solve it in the past), and a survivor (who has proven best over time).
The biggest survival sagas (aka changes in coding approaches) in the past involved:
  • App_Services ( Providing asynchronous processing capabilities ) ---entry below---
  • Reversible_Timelines ( How to build a rich-AI & physics world with rewind/fast-forward & network propagation )
  • RPG_Improvising_Rules ( Using Pen and Paper RPG flexibility in rigid Digital Simulations )
  • Knowledge_Representation ( Signs, symbols, relationships, inferences, remembering, forgetting, etc )
  • Energy_Propagator ( Propagating energy through matter, managing error metrics and logarithmic scales, suited for invisible RF/sound/thermal or imaginary 'energies' since we are too sensitive to aliasing/roughness/inaccuracies in the visible spectrum )
  • Behavior_Creation ( Using feedback loops to adjust behavior trees and mixing actions to generate complex behavior recipes )
  • Security_Islands ( Methods to protect Application-services, User-choices, and Simulation-events/thinking means to secure/limit data changes )
  • Shape_Synthesis ( Constructing destructible and animatable shapes for rendering, physics, and general AI queries )
  • Possibility_Mapping ( How to synthesize new animations through constrained mixing of local animation areas )
  • Bit_Shipping ( Methods to combine transforming, compressing, encrypting, and transferring raw bits of known data stream types )
  • Cultural_Text ( How we  manage dynamic paragraph layout, resolutions of mobile vs many screens, cache rendered words/sentences, rich font formatting, aliasing, writing orientations )
  • Coding_Productivity ( What choices/habits have universally been of benefit, code-generators using scripts, including naming conventions, file/make organization )
  • Project_Productivity ( What hurt/helped projects to get finished on time, auto-generated documentation & in-game bug-tracking, expectations vs humility vs drive )
  • Self_Balancing_Metrics ( How to dynamically adjust analog sensory subsystems ( graphics, audio ) and discrete subsystems ( physics, AI simulating ) to balance quality vs interactivity (frame-rate) )

A visualizer that relied on App_Services to balance network streams (as I can't find a thread-specific image)
Application Services

Context: where we use this

Application services is a name for how your software engine provides services, such as rendering an image or loading a file, to the actual product.  App-services are usually library or system calls, often directly spawning threads, or adding a task to a job-stealing pool.  Sometimes they use networking to contact another computer or network domain to make requests.  In all cases, this is how we harness all available CPUs and other processors as well as network access to deliver the best performance (at least the best under a given battery-budget/power-setting).
App Services provide background tasks such as filling or mixing audio buffers, file I/O, monitoring network messages, and decompressing assets.  In this context they can also provide immediate foreground services, such as balancing rendering loads, where there would be app services to mix geometry, search a list of text, count valid elements in an array, or walk a scene graph.

Goals: what we need

  • An API to schedule 'work' which is a computing 'service' and some context parameters ( this means a list of what services are provided and what data they need to run )
  • Work is scheduled with a time to start and an expected duration which provides prioritization hints to help balance the processing power
  • Supports profiling individual work-items, and the overall idle vs active efficiency of the App-Services system
  • Can balance work load across available resources ( 1 to N CPUs or special processors )
  • Can route work to particular processors ( such as GPUs, PS3 SPUs, or a particular thread, as is needed for Direct3D calls or thread-unsafe libraries )
  • Changes to shared data are atomic ( no side effects if several services are all accessing the same variable )
  • Supports an event graph using triggers ( finishing or starting a work item can launch dependent work with appropriate timing )
  • Application can be frozen and stored to disk to be resumed later ( for anywhere application-saves, reproducing bugs,  and power-management issues )
  • Cross-platform & OS-friendly (can manage Dll / Dyld challenges)
  • Responds to the standard 'life-cycle' of software systems which has Create, Destroy, Pause, Resume, and Run (update) modes to properly handle exiting when a fatal error occurs or a manageable error requires recovery.

Solutions: how we tried

Technique: Mega-Loop Cooperative-Yielding aka 'Melcoy'
In the 90s we started with a simple application-managed 'cooperative-multitasking system' that had a single 'Run' function which iterated over a linked-list of current work-items to provide the product the services it needed.  Each service function was written with a series of calls to Fn_yield() which pushed all the local variables to a pre-allocated stack that each work-item kept for the service it was performing.  Then the 'Run' function would switch to another function in the work-item list.  There it would restore those local variables and 'goto' to the previous Fn_yield label location to resume executing that code.  We used a lot of macros to declare local variables as part of a struct that was mem-copied quickly and let us have type-information for debugging.  It was a complex system but it did provide asynchronous activity at a time when all consumer machines had a single processor and stalls were common.
Pros: Didn't have the overhead of task switching that other systems had at the time, and it gave us loading with audio and interactivity, which was novel back then ( and a design choice we later dropped, given that audio and interactivity slowed down loading )
Cons: Required 'Fn_yield()' calls throughout all the service code, which made it hard for other coders to write code that played well together.  Although it was easy to profile each function, it was very hard to maintain even performance as Fn_yields could give radically different performance depending on the data they were using or if I/O was involved.  Scaling to many simultaneous services required a lot of memory at the time this technique was used.  Frame-rates stuttered until all the yields were tuned.
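For flavor, here is a much-reduced sketch of the cooperative-yield idea, written in the spirit of Duff's-device 'protothread' coroutines rather than the original macro/stack-copy machinery (which saved whole structs of local variables per work item); all names are illustrative:

//resume-point stored per work item; locals that must survive a yield live in the struct
struct Melcoy_Work_t
{
    int Resume_line;  //0 before the first run
    int i;            //example 'local' that persists across yields
};

#define FN_BEGIN( w )  switch( (w)->Resume_line ) { case 0:
#define FN_YIELD( w )  do { (w)->Resume_line = __LINE__; return; case __LINE__: ; } while( 0 )
#define FN_END( w )    } (w)->Resume_line = 0

//a service that fills a buffer one chunk at a time, yielding between chunks so the
//mega-loop can run the other work items in its list
void Service_fill_buffer( Melcoy_Work_t* w )
{
    FN_BEGIN( w );
    for( w->i = 0; w->i < 8; w->i++ )
    {
        /* ...do one chunk of work here... */
        FN_YIELD( w );
    }
    FN_END( w );
}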

Technique: Preemptive Service Manager aka 'Presm'
Inspired by how most operating systems handle pre-emptive multi-tasking, we used one low-priority thread to monitor one or more 'worker' threads, using 80x86 interrupts to halt execution and custom push/pop assembly to save and restore the existing registers on the stack.
Pros: Learned a lot while coding this.
Cons: Complexity in coding.  Bugs resulting from the register push/pop choices.  Weird cases that were hard to reproduce.  No ability to save conditions to disk, as the implementation was address/register-specific and didn't stay constant between executions.

Technique: Micro-Functions (hierarchy to scale) aka 'Mif'
Next we tried to break our services into 'micro-functions' that could be streamed together to form the larger 'service-oriented' functionality as needed.  Initially this gave us the ability to pause and resume any given service since it was being represented as a series of instructions, which effectively made it a cluster of Virtual Machines.  To provide asynchronous behavior we would simply execute a series of micro-functions from one work item's service, then switch to another, continuously, round-robin.  We used a priority ring that kept work-items like mix-audio and update-user-input high up and file I/O 'read-next-chunk' low down.
Pros: Easy to debug in realtime (can edit these instructions) and simple to understand. Although it still used a single thread of execution, it was very responsive even with heavy I/O loads.
Cons: Although it was responsive in terms of consistent user-input affecting the world and screen, the background-task performance was awful for complex activities.  Iterating 1,000s of entities in a scene, animating geometry, image-uploads, and polynomial solvers would take a long time to get done, which required fallbacks such as blurry visuals or stuttered animations.  More importantly, any atomic synchronizing of shared data had frequent long-duration stalls due to contention.  Having such a small granularity of functionality prevented us from acquiring a shared variable just once, as we'd have to acquire (for either read or write) and release it for each micro-function.  This cost added up quickly and even had the unfortunate result ( in 2008 ) of running faster on a dual-core than on a four-core Xeon of the same speed.

Technique: Specialized-Threads aka 'Spet'
In an attempt to honor the 'simple over clever' principle, we finally built a traditional 'specialized threads'  approach.  This technique simply launched threads for services when they were needed and used a message passing system to flag when the task was done.  Audio mixing, animation blending, collision detection, etc. were all their own simple thread running one dedicated function.
Pros: Let the OS handle the schedule.  Code is simple and easy for external coders to understand and modify.  Simpler to graph profiling data and think about most services.  Integrated better with existing debugging tools than other approaches.
Cons: Given the specialized nature of each thread function, profiling and balancing became a lot of work.  Moving to a different hardware configuration could wreak havoc on timings.  OS thread calls didn't perform consistently and timings between thread rescheduling could wildly vary.  Easy to understand but very hard to balance without rewriting the specialized functions.

Survivor: who proved best & why

Technique: Work-Scheduler aka 'Wos'
Motivated to avoid the costly tuning required to balance dedicated thread-functions and scale performance up to use the hyper-threaded P4 ( in 2003) and the inevitable dual-core ( 2005 ) machines, we tried running generic threads whose main 'run' function pulled work out of a queue to provide dedicated services.  This was an evolution of the Mif approach described above made for 3 or more threads.  We used a red-black tree to sort the timings of tasks and an 'association' hash to characterize each work item.  Each thread had its own local work-queue which it would pull work-items from and potentially add back into.  After reaching a threshold number of work-items completed or added, the per-thread work-queue is merged back into a system-wide common 'work-queue' to eliminate contentious stalls over the common queue.  There is a singleton Work_Scheduler that owns the common queue as well as the per-thread queues and is responsible for signaling threads to be notified of application-life-cycle changes such as pause, resume, shutdown or error.  Wos works on 'service-functions' that can be large and take a lot of time or very small and fast to execute.  A key difference from the Mif micro-function-only approach is in profiling and estimating a finishing time for the service to better manage hiccups in the schedule and a minimum of 3 threads with some services only executing on a prime 'user-input/OS-message' thread and the other two allowing long-term background services to run uninterrupted.
Pros: Has enough information at run-time to self-balance performance.  Simple to use and easy to view profiling information.  Although untested on more than 64 cores at this time, it has shown a steady performance increase with more cores, and it can scale down to use less processing to save mobile battery or play nicer with other apps.
Cons: Overhead from translating the service-function data into actual local variables to use.  Atomic access to shared memory impacts the performance of 'small-services.'
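As a hedged sketch of the scheduling decision described above (hypothetical names and helpers, not the shipping code), each work item carries a desired start, an expected duration taken from its past profiling, and a required finish; the scheduler compares these against the frame deadline to decide whether the prime 'user-input/OS-message' thread can afford to run the item now or whether it belongs on a background thread's queue:

//===
//Sketch: using profiled timings to decide where a work-item may run (hypothetical names)
typedef unsigned int Msec_t;   //milliseconds, as in the Survivor_Datastructs below

typedef struct
{
  Msec_t Desired__Start_msec;
  Msec_t Expected__Duration_msec;   //estimated from this service-function's past runs
  Msec_t Required__Finish_msec;
} Work_Timing_t;

//returns nonzero if the prime thread can run this item without blowing its frame deadline
static int Prime_Thread__Can_Run( const Work_Timing_t *timing_p,
                                  Msec_t now_msec,
                                  Msec_t frame_deadline_msec )
{
  Msec_t expected_finish_msec = now_msec + timing_p->Expected__Duration_msec;

  return ( now_msec >= timing_p->Desired__Start_msec ) &&
         ( expected_finish_msec <= frame_deadline_msec ) &&
         ( expected_finish_msec <= timing_p->Required__Finish_msec );
}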

Future: For many years this approach has served us well, mapping nicely to mobile (iOS & Android) and allowing us to take advantage of the many cores now found in workstations.  Future cooperation with different processors such as GPUs, especially once they can issue their own tasks from within OpenCL-style code, may require big changes, or it may fit into the existing Work_Scheduler as a new block of Work_Queues.  If hardware progresses to allow even cheaper profiling and a safe way to generate CPU instructions tuned to the current task, we could see a new iteration of this Work_Scheduler pattern that builds the actual code for its service implementations based on their context parameters and past performance.  Given faster local network connections, perhaps the Work_Scheduler could also be extended across local clusters, the way super-computers operate.  In all cases, it will be interesting to see how this survivor-solution fares over the next 8 years.

Survivor_Datastructs

//===
//STRUCTS
//Using C & pseudo-Container-templates
//where
// Usec_t is microseconds, stored as a 'double'
// Msec_t is milliseconds, stored as an unsigned 32-bit int
// Array_Ref is an array container class that provides fast access and linear memory
// List_Ref is a singly-linked list class used for building 'trigger' lists (the On_Start/On_Finish chains below)
// Dict_Entry__Ref is a dictionary entry class that connects to a searchable dictionary container class
// Context_Param_t is a union of various data types plus a member that indicates which type is stored & basic constraints.

//===
struct Work_Item_t
{
  //---
  //External Ref
  Dict_Entry__Ref< Profile_t > Profile_p;
  List_Ref< Work_Item_t > On_Start__Worktm__Next_p;
  List_Ref< Work_Item_t > On_Finish__Worktm__Head_p;

  //---
  //Locally Aligned Members
  Work_Category_t category;
  Work_Func_t func;

  Context_Param_t A_param;
  Context_Param_t B_param;
  Context_Param_t C_param;
  Context_Param_t D_param;

  Msec_t Desired__Start_msec;
  Msec_t Expected__Duration_msec;
  Msec_t Required__Finish_msec;
};

//===
struct Work_Queue_t
{
  //---
  //External Ref
  Dict_Entry__Ref< Profile_t > Profile_p;

  //---
  //Locally Aligned Members
  AtomicToken64_t Item_access;
  //used to atomically grant exclusive write or shared read access to this struct
  //(the work-item storage itself -- the red-black tree sorted by task timings and the
  // 'association' hash described above -- is elided from this listing)

};

//===
struct Work_Schedule_t
{
  //---
  //External Ref
  Dict_Entry__Ref< Profile_t > Profile_p;
  Array_Ref< Work_Queue_t > Local__Work_Queues__array;
  Work_Queue_t Common__Work_queue;

  //---
  //Locally Aligned Members
  uint32_t Work_Items__Total_u;

  //Simple Schedule Assessment of work activity vs idle time
  Usec_t Idle_Work__usec;
  Usec_t Active_Work__usec;
  float_t Efficiency_Ratio_f;

  AtomicToken64_t Array_access;
  //used to atomically grant exclusive write or shared read access to this struct
};
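To show how these structs fit together, here is a hedged sketch of the per-thread 'run' loop in the same pseudocode style (the helper functions, the Work_Func_t call signature, and the merge threshold are hypothetical, not the shipping implementation): each thread drains its local Work_Queue_t and merges back into the common queue only after a threshold of completed/added items, which is what keeps contention on the common queue low.

//===
//Sketch: per-thread run loop (pseudocode; helper names & threshold are hypothetical)
#define MERGE__THRESHOLD 32

void Work_Thread__Run( Work_Schedule_t *schedule_p, uint32_t thread_index )
{
  Work_Queue_t *local_p = &schedule_p->Local__Work_Queues__array[ thread_index ];
  uint32_t touched_count = 0;

  while( !Schedule__Shutdown_Requested( schedule_p ) )
  {
    //pull the item the local queue considers most urgent (by Desired__Start_msec)
    Work_Item_t *item_p = Queue__Pull_Item( local_p );
    if( item_p == NULL )
    {
      //nothing local: merge with the common queue to give back / refill work
      Queue__Merge( local_p, &schedule_p->Common__Work_queue );
      touched_count = 0;
      continue;
    }

    //execute the service-function with its context parameters
    item_p->func( item_p->A_param, item_p->B_param, item_p->C_param, item_p->D_param );

    //queue any work-items triggered by this one finishing
    touched_count += 1 + Queue__Add_Triggered( local_p, item_p->On_Finish__Worktm__Head_p );

    //merge back into the common queue only occasionally, to limit contentious stalls
    if( touched_count >= MERGE__THRESHOLD )
    {
      Queue__Merge( local_p, &schedule_p->Common__Work_queue );
      touched_count = 0;
    }
  }
}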

Monday, November 15, 2010

Digitally-Enhanced In-Person Gaming

My current goal is to enhance our weekly RPG sessions, with pen'n'paper'n'dice, book-searching, & sometimes miniatures/maps, with modern tools...the smartphone or tablet (iPad).  The primary benefit is quicker access to a range of options as well as a richly-detailed experience.  It'd allow showing/sharing the available choices, probabilities, and narrative-possibilities for different styles of RPG-gameplay, particularly those with player-created plot-elements (Aspects to tap, etc).

Wednesday, November 10, 2010

Thought about Humanity's future...

"...If the Industrial age was about the dominance of the machine over nature, the digital age is about the reinvention of nature through technlogy.  We now have the power to adapt the environment to ourselves, to redesign nature according to our own specifications.  Our mission is no longer merely survival, but survival with a maximum amount of pleasure and control..."-- Iara Lee, Synthetic Pleasures.



Tuesday, November 9, 2010

Demise of Caprica

Y'know, at its best, SciFi produces 'great entertainment with provocative & thoughtful commentary' but its most common TV representative is outer-space-dress-up (space-wear or rubber/CG-aliens) regurgitating bland action tropes emphasizing the chosen-people-righteousness & always-win-in-the-end-superiority of humanity ( perhaps this is spelled 'SyFy' ? No offense to the "Mansquito" or "OctoShark" art-films, of course )

Do you guys think the universal definition/meme for science-fiction is a "setting", as in futuristic-tech, robots, space-travel, etc? As if scifi doesn't require science, novel ideas, current speculations, just a "setting."

For any popular entertainment fiction, whether it be modern-action (24), historical-drama (Rome, Madmen), what is labeled 'science-fiction' (V, SG-U) or what is not (LOST, Fringe, The EvEnt), perhaps it'd be clearer if we dropped those labels and measured shows in terms of:

[] Good-Story/Writing = compelling characters with consistent choices, believable-turn-of-events, & satisfying-plotlines
[] Intriguing Ideas = speculative commentary about life today (Iraq-planet, religion, politics) or tomorrow (all the socio-environmental what-ifs...)
[] Audience-Appealing Setting = a place and time that thematically identifies the story...( Space-BSG turned off common folks, Mostly-Modern-Caprica turned off SyFy-setting-loving folks, Wild-Chinese-West-meets-Space Firefly turned off FOX Execs )
[] Plausibility = a measure of how possible the story is...from retellings of actual events ( JFK, W) to possible tech ( robots, virtual reality, genetic- ) to fantasy as we know it ( magic, Faster-than-Light-Travel ). ( Funny to consider that most spy shows, like James Bond are mostly science-fiction-setting in their spy-gadgets and fantasy in their bullet absorbing. )


-------------------------
Caprica was definitely good & bad:

On the good side, the show had cool ideas about religion & 'virtual-heaven', the future of A.I., exploring how no-consequence virtual-worlds could become rife w/ deviant sex & violence, how uploading your mind lets you 'live forever' and the consequences of it, a society with prejudices that are mirrored but different than ours (not racial/sexual-orientation issues, Caprican-Tauran ones), how terrorist recruitment of kids happens so easily, and the story of how humanity's cylon children were born into slavery.

On the bad side, the dialog had a fair amount of cringe-worthy moments, lots of character-contradictions (Graystone), attempts to comment on virtual-violence/sex/obsessions lamely turned into showing it for too long, and there really wasn't one character that was a 'good/decent' person...other than moments of Sam Adama's Ha'la'tha humor, I really didn't like anyone (although evil Sister Clarice was a memorable Atia-like villain), and its overall vibe was distilled depression.

Oh well, back to Fringe/SGU