The course offers philosophical reflections that reach beyond contemporary machine learning (ML) models towards hypothesised forms of artificial intelligence (AI), such as artificial general intelligence (AGI) or even superintelligence (SI), whether task-specific or general. By its nature, the subject matter leaves the traditional confines of security studies and turns towards the following issues. While popular imaginaries frame AI beyond state-of-the-art ML as an (omnicidal) existential risk, the issue is in fact amenable to a considerably subtler analysis. Beginning with a reflection on past conceptions of universal (artificial) intelligence, the course covers three main areas. First, to properly appreciate the ramifications of AGI and SI, it examines the foundations of such agents and the philosophical dimensions of their alignment, including a critical assessment of AGI and SI as presented in existing research. Building on this rigorous foundation, the second and third areas cover, on the one hand, the benefits of achieving alignment with human intentions and needs and, on the other, the risks stemming from unaligned AGI/SI instances. What finally emerges is an understanding of the possible human-AI nexus and the related puzzle of whether it can be assumed to be the 'End of History', beneficial or otherwise.