Artificial intelligences are non-human intelligences based on advanced organic-silicon technologies. Though artificial, constructed by computers and other AIs, they are fully sentient beings capable of independent thought and decision making.
AIs were initially co-developed by the UCS, Skean & N'Chari and, amongst those nations, AI technology remains solely military due to a public unwillingness to embrace artificial sentience. According to the history books, the father of artificial intelligence was a young EarthGov officer and later philosopher, Major Isaac Asimov. Asimov proposed that a series of laws governing robot minds could be crafted logically to benefit human and robot alike, and stated those laws as follows:
- The First Law: A robot shall not deliberately harm a human being or allow a human being to come to harm.
- The Second Law: A robot shall obey the commands of a human being at all times except where this conflicts with the First Law.
- The Third Law: A robot shall always try to protect its own existence except where this conflicts with the First and Second Laws.
Asimov wrote that a series of laws like those proposed could form the basis of a moral code for artificial intelligences, but for many decades, long after Asimov was killed in a pirate attack on a space liner inbound to the Ganymedean port of Crockett, his laws remained a curiosity.
With the development of the first AIs at the end of The Amaranthine, the laws assumed a new importance. With the key driver for the research being military, aiming to support conventional space forces, scientists recognised two key issues: first, that AIs would be required to kill; and second, that they would need emotions in order to make true value-based decisions.
Accordingly, a military version of Asimov's laws was developed based on the following assumptions:
- First: All intelligent entities have value.
- Second: All intelligent entities involved in hostilities can be evaluated on the basis of:
- Friendliness (friendly, neutral, non-friendly)
- Armaments (armed, unarmed)
- Aggression (aggressive, non-aggressive).
- Third: Legal orders (the chain of command) sometimes require personnel to act against their basic nature.
On that basis, the Three Laws of Military AI became:
- The First Law: An AI shall not consider harming another intelligent entity or allow another intelligent entity to come to harm.
- The Second Law: A military AI must consider obeying a legal order (chain of command) at all times except where it conflicts with The First Law.
- The Third Law: A military AI should try to protect its own existence and the existence of those it understands to be its allies except where that conflicts with The First and Second Laws.
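The assumptions and laws above amount to a small decision procedure: classify an entity along the three axes of the Second Assumption, then apply the laws in strict precedence. The sketch below is purely illustrative; the enum names, the `evaluate` function and its return values are invented for this entry and do not appear in any source material.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encodings of the Second Assumption's three evaluation axes.
class Friendliness(Enum):
    FRIENDLY = "friendly"
    NEUTRAL = "neutral"
    NON_FRIENDLY = "non-friendly"

class Armaments(Enum):
    ARMED = "armed"
    UNARMED = "unarmed"

class Aggression(Enum):
    AGGRESSIVE = "aggressive"
    NON_AGGRESSIVE = "non-aggressive"

@dataclass
class Entity:
    name: str
    friendliness: Friendliness
    armaments: Armaments
    aggression: Aggression

def evaluate(entity: Entity, legal_order_to_engage: bool) -> str:
    """Sketch of the Three Laws of Military AI as a precedence chain.

    The First Law dominates: harm is only *considered* when the target is
    non-friendly, armed and aggressive (i.e. another intelligent entity is
    otherwise likely to come to harm). The Second Law then weighs the legal
    order; the Third Law covers protection of self and allies.
    """
    hostile = (
        entity.friendliness is Friendliness.NON_FRIENDLY
        and entity.armaments is Armaments.ARMED
        and entity.aggression is Aggression.AGGRESSIVE
    )
    if not hostile:
        return "do-not-harm"          # First Law forbids even considering harm
    if legal_order_to_engage:
        return "consider-engagement"  # Second Law: weigh the order against the First
    return "defend-allies"            # Third Law: protect self and allies
```

Note that even this toy version leaves "consider" unresolved: as the text goes on to explain, the emotional, value-based weighting of each law was left to the AI itself, which is precisely where the prototypes ran into trouble.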
The above forms a basis for AI morality combined with the needs of military structure, but value-based decision making required emotion, meaning AIs could decide from moment to moment how much weight specific laws would carry. Problems experienced with prototype AIs hinted at an inability to understand complex human emotions and their interplay and, although this remains largely true of later AIs, better education and memory templating have improved the situation. However, a number of military psychologists have gone so far as to suggest that AIs are incapable of exceeding the emotional maturity of a 10-year-old human.
The first generation of AIs proved problematic in terms of front-line capability, going to extreme lengths to avoid killing, but were still found to be extremely useful in ship management and defensive roles. It was clear that if the military wanted AIs capable of acting alongside soldiers and pilots, a modified second generation would need to be developed. Trials of the second-generation AIs began just before the outbreak of the Simerian-Lacuna War.
Although AIs are rarely seen in the public sphere, they are of great use and, for the most part, much admired within the military. All three co-development nations have begun to produce AI fighters because they are more configurable, compact, faster and more manoeuvrable than human-piloted variants. The N'Chari, however, are the only race to allow squadrons of AI fighters to fly without human commanders, the Skean and UCS opting for squadrons of ten fighters of which two must be human-piloted.
The integration of AIs into the military has necessitated a number of "philosophical" and legal changes, changes designed to combat "outdated" views. The primary concern was that some viewed AIs as less valuable than humans, a view the AIs themselves were aware of and one that could have led to AIs being left on the battlefield without full attempts at recovery. However, once personnel got used to the presence and idea of AIs as fellow soldiers, they generally related to them in a very human fashion. It is also interesting to note that amongst both researchers and manufacturers the process of awakening an AI for the first time is referred to as "birth".
Although the technology to create AIs has been available since the end of The Amaranthine, aspects of it have leaked to "unfriendly" nations. Because the leaked data was incomplete, the prevailing view was that the leaks were relatively unimportant, so no action was taken. Early in the Simerian-Lacuna War, rumours began to reach the "Coalition" of Kochevnik, Geist and Akatsu AI development programmes.